Feb 02 10:31:57 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 10:31:57 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:31:57 crc restorecon[4680]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:31:57 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 
10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc 
restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:57 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc 
restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:31:58 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:31:58 crc restorecon[4680]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:31:58 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 10:31:59 crc kubenswrapper[4845]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.419062 4845 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423921 4845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423953 4845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423963 4845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423972 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423981 4845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423990 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.423997 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424009 4845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424019 4845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424029 4845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424038 4845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424061 4845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424070 4845 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424079 4845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424088 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424097 4845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424106 4845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424113 4845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424121 4845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424130 4845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424138 4845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424146 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424154 4845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424162 4845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424170 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424177 4845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424185 4845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424194 4845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424202 4845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424211 4845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424219 4845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424228 4845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424236 4845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424244 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424252 4845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424263 4845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424272 4845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424281 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424293 4845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424303 4845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424311 4845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424319 4845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424326 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424334 4845 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424344 4845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424353 4845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424361 4845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424369 4845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424377 4845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424384 4845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424392 4845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424400 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424407 4845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424415 4845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424422 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424433 4845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424442 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424452 4845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424461 4845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424469 4845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424481 4845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424489 4845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424497 4845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424504 4845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424512 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424519 4845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424527 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424535 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424542 4845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424550 4845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.424557 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427153 4845 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427181 4845 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427198 4845 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427209 4845 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427222 4845 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427234 4845 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427245 4845 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427286 4845 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427295 4845 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427304 4845 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427314 4845 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427323 4845 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427333 4845 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427341 4845 flags.go:64] FLAG: --cgroup-root=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427350 4845 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427359 4845 flags.go:64] FLAG: --client-ca-file=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427368 4845 flags.go:64] FLAG: --cloud-config=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427376 4845 flags.go:64] FLAG: --cloud-provider=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427385 4845 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427394 4845 flags.go:64] FLAG: --cluster-domain=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427403 4845 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427413 4845 flags.go:64] FLAG: --config-dir=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427421 4845 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427432 4845 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427443 4845 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427452 4845 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427461 4845 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427471 4845 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427479 4845 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427488 4845 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427497 4845 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427506 4845 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427515 4845 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427525 4845 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427534 4845 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427543 4845 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427552 4845 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427562 4845 flags.go:64] FLAG: --enable-server="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427571 4845 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427582 4845 flags.go:64] FLAG: --event-burst="100"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427593 4845 flags.go:64] FLAG: --event-qps="50"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427602 4845 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427612 4845 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427621 4845 flags.go:64] FLAG: --eviction-hard=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427633 4845 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427642 4845 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427651 4845 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427660 4845 flags.go:64] FLAG: --eviction-soft=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427669 4845 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427677 4845 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427686 4845 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427694 4845 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427703 4845 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427711 4845 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427720 4845 flags.go:64] FLAG: --feature-gates=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427731 4845 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427740 4845 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427749 4845 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427758 4845 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427767 4845 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427776 4845 flags.go:64] FLAG: --help="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427785 4845 flags.go:64] FLAG: --hostname-override=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427794 4845 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427803 4845 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427812 4845 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427821 4845 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427829 4845 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427838 4845 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427847 4845 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427856 4845 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427865 4845 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427874 4845 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427883 4845 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427917 4845 flags.go:64] FLAG: --kube-reserved=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427926 4845 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427934 4845 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427944 4845 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427953 4845 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427962 4845 flags.go:64] FLAG: --lock-file=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427970 4845 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427979 4845 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.427988 4845 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428000 4845 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428009 4845 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428018 4845 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428026 4845 flags.go:64] FLAG: --logging-format="text"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428035 4845 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428045 4845 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428053 4845 flags.go:64] FLAG: --manifest-url=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428062 4845 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428073 4845 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428082 4845 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428092 4845 flags.go:64] FLAG: --max-pods="110"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428101 4845 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428110 4845 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428119 4845 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428127 4845 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428136 4845 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428145 4845 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428154 4845 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428173 4845 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428182 4845 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428191 4845 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428200 4845 flags.go:64] FLAG: --pod-cidr=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428209 4845 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428233 4845 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428244 4845 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428255 4845 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428266 4845 flags.go:64] FLAG: --port="10250"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428277 4845 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428286 4845 flags.go:64] FLAG: --provider-id=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428294 4845 flags.go:64] FLAG: --qos-reserved=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428304 4845 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428312 4845 flags.go:64] FLAG: --register-node="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428322 4845 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428331 4845 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428345 4845 flags.go:64] FLAG: --registry-burst="10"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428353 4845 flags.go:64] FLAG: --registry-qps="5"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428362 4845 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428372 4845 flags.go:64] FLAG: --reserved-memory=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428382 4845 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428391 4845 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428400 4845 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428408 4845 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428417 4845 flags.go:64] FLAG: --runonce="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428426 4845 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428435 4845 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428444 4845 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428452 4845 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428461 4845 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428470 4845 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428479 4845 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428489 4845 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428497 4845 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428506 4845 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428515 4845 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428524 4845 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428533 4845 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428542 4845 flags.go:64] FLAG: --system-cgroups=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428550 4845 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428564 4845 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428572 4845 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428581 4845 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428590 4845 flags.go:64] FLAG: --tls-min-version=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428599 4845 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428608 4845 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428616 4845 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428625 4845 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428635 4845 flags.go:64] FLAG: --v="2"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428646 4845 flags.go:64] FLAG: --version="false"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428658 4845 flags.go:64] FLAG: --vmodule=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428669 4845 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.428678 4845 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428877 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428911 4845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428920 4845 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428928 4845 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428936 4845 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428944 4845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428952 4845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428960 4845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428969 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428977 4845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428986 4845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.428994 4845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429002 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429018 4845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429026 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429034 4845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429042 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429049 4845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429057 4845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429065 4845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429072 4845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429080 4845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429088 4845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429096 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429104 4845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429111 4845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429119 4845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429126 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429136 4845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429145 4845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429154 4845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429170 4845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429180 4845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429188 4845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429197 4845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429205 4845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429213 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429221 4845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429228 4845 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429236 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429244 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429251 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429259 4845 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429266 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429276 4845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429289 4845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429298 4845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429306 4845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429316 4845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429326 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429337 4845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429346 4845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429355 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429364 4845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429372 4845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429380 4845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429387 4845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429395 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429402 4845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429410 4845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429418 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429425 4845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429433 4845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429440 4845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429448 4845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429456 4845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429464 4845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429473 4845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429480 4845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429488 4845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.429498 4845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.430849 4845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.444424 4845 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.444487 4845 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444610 4845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444624 4845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444631 4845 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444637 4845 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444644 4845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444650 4845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444658 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444666 4845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444673 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444681 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444689 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444695 4845 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444701 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444708 4845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444713 4845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444718 4845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444723 4845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444728 4845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444733 4845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444740 4845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true.
It will be removed in a future release. Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444748 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444754 4845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444760 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444767 4845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444772 4845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444779 4845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444783 4845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444789 4845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444795 4845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444801 4845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444807 4845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444812 4845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444819 4845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444825 4845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 
10:31:59.444832 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444839 4845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444846 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444852 4845 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444858 4845 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444864 4845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444871 4845 feature_gate.go:330] unrecognized feature gate: Example Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444878 4845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444910 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444916 4845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444922 4845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444930 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444936 4845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444942 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444947 4845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444952 4845 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444958 4845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444963 4845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444969 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444975 4845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444980 4845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444985 4845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444990 4845 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.444995 4845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445000 4845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445005 4845 feature_gate.go:330] unrecognized 
feature gate: MachineConfigNodes Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445010 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445015 4845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445022 4845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445030 4845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445036 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445042 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445046 4845 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445051 4845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445057 4845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445064 4845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445070 4845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.445080 4845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445263 4845 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445275 4845 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445284 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445291 4845 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445297 4845 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445303 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445308 4845 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445314 4845 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445320 4845 feature_gate.go:330] unrecognized feature gate: Example Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445328 4845 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445333 4845 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445339 4845 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445345 4845 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445350 4845 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445358 4845 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445363 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445368 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445373 4845 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445378 4845 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445384 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445389 4845 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445394 4845 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445399 4845 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445405 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445411 4845 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445416 4845 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445422 4845 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445427 4845 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445432 4845 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445438 4845 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445444 4845 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445449 4845 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445454 4845 feature_gate.go:330] unrecognized 
feature gate: PlatformOperators Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445461 4845 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445467 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445473 4845 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445478 4845 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445484 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445490 4845 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445495 4845 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445503 4845 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445508 4845 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445515 4845 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445520 4845 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445526 4845 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445532 4845 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445538 4845 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445543 4845 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445548 4845 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445553 4845 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445558 4845 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445563 4845 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445569 4845 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445574 4845 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445579 4845 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445585 4845 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445590 4845 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445595 4845 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445600 4845 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445605 4845 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445610 4845 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445616 4845 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445621 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445627 4845 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445632 4845 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445637 4845 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445644 4845 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445650 4845 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445656 4845 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445661 4845 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.445667 4845 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.445676 4845 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.445938 4845 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.450999 4845 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.451117 4845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.453127 4845 server.go:997] "Starting client certificate rotation" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.453166 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.453398 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 18:53:15.651154807 +0000 UTC Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.453574 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.488660 4845 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.488713 4845 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.491302 4845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.509984 4845 log.go:25] "Validated CRI v1 runtime API" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.553529 4845 log.go:25] "Validated CRI v1 image API" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.556553 4845 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.565089 4845 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-10-27-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.565156 4845 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.596525 4845 manager.go:217] Machine: {Timestamp:2026-02-02 10:31:59.592541207 +0000 UTC m=+0.683942737 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a0f7ad40-dfc7-4c48-b08f-9dc9799ca728 BootID:8f18ce78-9cc3-4dbd-9d49-5987790a156d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:07:db Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:07:db Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d9:70:55 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9c:d1:3f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d2:9e:7c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f5:c4:55 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:ec:c3:07:d8:6f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:33:a6:ce:a0:af Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.597001 4845 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.597282 4845 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.599394 4845 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.599745 4845 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.599811 4845 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.600253 4845 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.600276 4845 container_manager_linux.go:303] "Creating device plugin manager" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.601014 4845 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.601072 4845 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.602203 4845 state_mem.go:36] "Initialized new in-memory state store" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.602358 4845 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.607733 4845 kubelet.go:418] "Attempting to sync node with API server" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.607773 4845 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.607815 4845 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.607839 4845 kubelet.go:324] "Adding apiserver pod source" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.607860 4845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.613569 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.613683 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.613683 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.613770 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.617292 4845 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.618490 4845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.620242 4845 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622087 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622131 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622147 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622163 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622186 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622201 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622215 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622237 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622252 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622267 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622286 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.622300 4845 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.623332 4845 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.624032 4845 server.go:1280] "Started kubelet" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.625143 4845 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.625250 4845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.625982 4845 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.626341 4845 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 10:31:59 crc systemd[1]: Started Kubernetes Kubelet. Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.627558 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.627608 4845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.627852 4845 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.627936 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:03:27.718556966 +0000 UTC Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.628170 4845 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.628191 4845 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 
10:31:59.628241 4845 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.629030 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.629165 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.629281 4845 factory.go:55] Registering systemd factory Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.629321 4845 factory.go:221] Registration of the systemd container factory successfully Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.630963 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.631190 4845 factory.go:153] Registering CRI-O factory Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.631226 4845 factory.go:221] Registration of the crio container factory successfully Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.631371 4845 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 10:31:59 crc 
kubenswrapper[4845]: I0202 10:31:59.631420 4845 factory.go:103] Registering Raw factory Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.631453 4845 manager.go:1196] Started watching for new ooms in manager Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.636858 4845 server.go:460] "Adding debug handlers to kubelet server" Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.636300 4845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18906760f191cc19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:31:59.623990297 +0000 UTC m=+0.715391777,LastTimestamp:2026-02-02 10:31:59.623990297 +0000 UTC m=+0.715391777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.638620 4845 manager.go:319] Starting recovery of all containers Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.654622 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655242 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 10:31:59 crc 
kubenswrapper[4845]: I0202 10:31:59.655269 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655291 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655310 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655329 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655349 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655377 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655402 
4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655419 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655436 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655454 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655472 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655500 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655519 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655546 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655601 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655623 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655641 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655661 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655678 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655697 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655716 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655733 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655750 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655772 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655795 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" 
seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655816 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655874 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655924 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.655971 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656013 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656040 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656066 4845 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656086 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656106 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656124 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656146 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656168 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656186 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656208 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656229 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656249 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656269 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656291 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656311 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656332 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656354 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656376 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656396 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656415 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656435 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656463 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656483 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656508 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656528 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656548 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656569 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" 
seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656589 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656611 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656631 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656651 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656671 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656693 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 10:31:59 crc 
kubenswrapper[4845]: I0202 10:31:59.656712 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656733 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656752 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656774 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656793 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656814 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656835 4845 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656857 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656878 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656927 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656946 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656966 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.656987 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657013 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657043 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657073 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657095 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657115 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657136 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657158 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657178 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657199 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657223 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657243 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657261 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657281 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657299 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657318 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657337 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657356 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657374 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 
10:31:59.657396 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657416 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657435 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657454 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657473 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657493 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657514 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657533 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657553 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657585 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657607 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657627 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657649 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657671 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657691 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.657714 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661184 4845 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661245 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661271 4845 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661293 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661316 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661339 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661361 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661384 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661406 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661428 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661450 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661473 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661495 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661516 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661537 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" 
seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661561 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661587 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661606 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661627 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661647 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661691 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 
10:31:59.661712 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661732 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661753 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661772 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661792 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661811 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661832 4845 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661853 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661873 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661921 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.661943 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662009 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662036 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662064 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662086 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662108 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662129 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662148 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662168 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662190 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662208 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662227 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662249 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662270 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662291 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662311 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662330 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662354 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662378 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662400 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662419 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662442 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662462 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662483 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662502 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662522 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662544 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" 
seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662566 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662587 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662607 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662626 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662644 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662666 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662688 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662706 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662728 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662746 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662766 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662787 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662806 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662828 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662848 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662867 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662919 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662940 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662962 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.662980 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663000 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663023 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663054 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663074 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663094 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663112 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663130 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663149 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663166 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663190 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663209 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663234 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663254 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663276 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663297 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663317 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663334 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663353 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663377 4845 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663395 4845 reconstruct.go:97] "Volume reconstruction finished"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.663408 4845 reconciler.go:26] "Reconciler: start to sync state"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.666745 4845 manager.go:324] Recovery completed
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.678137 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.681608 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.681649 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.681663 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.684271 4845 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.684300 4845 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.684329 4845 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.702899 4845 policy_none.go:49] "None policy: Start"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.704406 4845 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.704443 4845 state_mem.go:35] "Initializing new in-memory state store"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.706317 4845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.711256 4845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.711308 4845 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.711345 4845 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.711408 4845 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 02 10:31:59 crc kubenswrapper[4845]: W0202 10:31:59.714022 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.714179 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.728744 4845 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.764528 4845 manager.go:334] "Starting Device Plugin manager"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.765004 4845 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.765051 4845 server.go:79] "Starting device plugin registration server"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.765975 4845 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.766038 4845 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.766289 4845 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.766549 4845 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.766585 4845 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.777609 4845 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.812060 4845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.812227 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.813691 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.813773 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.813798 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.814120 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.814485 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.815099 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.818632 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.818671 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.818700 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.819827 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.819830 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.820019 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.820070 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.820140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.820164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821107 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821148 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821292 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821342 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821355 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821566 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821691 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.821733 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.822735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.822786 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.822803 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823175 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823392 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823687 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.823796 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.824697 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.824770 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.824799 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.825035 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.825071 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.825089 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.825213 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.825277 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.826656 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.826701 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.826719 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.832674 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865571 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865636 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865676 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865710 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865871 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.865966 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866026 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866150 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866188 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866237 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866263 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866299 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866323 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.866358 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.867948 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.870087 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.870131 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.870150 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.870189 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: E0202 10:31:59.870635 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.967934 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968287 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968510 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968679 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968873 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968623 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968400 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968794 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968139 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.968954 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969071 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID:
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969251 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969286 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969319 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969331 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969353 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969380 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969408 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969386 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969413 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969458 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969466 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc 
kubenswrapper[4845]: I0202 10:31:59.969510 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969488 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969490 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969580 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969618 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969633 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969668 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:31:59 crc kubenswrapper[4845]: I0202 10:31:59.969685 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.071026 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.073391 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.073670 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.073865 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.074080 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.075218 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 
10:32:00.162462 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.175571 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.202438 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.229756 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.233933 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.237563 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-73c9a9f0b2b452aab98efd95c7ca2d32f9daab3ea016c7b91f0a9f15ea54e21d WatchSource:0}: Error finding container 73c9a9f0b2b452aab98efd95c7ca2d32f9daab3ea016c7b91f0a9f15ea54e21d: Status 404 returned error can't find the container with id 73c9a9f0b2b452aab98efd95c7ca2d32f9daab3ea016c7b91f0a9f15ea54e21d Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.237682 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.244521 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-979577b20829b0ebe0549162765f5b2ce2677fa842bd6af8dee6ff0913c51afb WatchSource:0}: Error finding container 979577b20829b0ebe0549162765f5b2ce2677fa842bd6af8dee6ff0913c51afb: Status 404 returned error can't find the container with id 979577b20829b0ebe0549162765f5b2ce2677fa842bd6af8dee6ff0913c51afb Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.253971 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8ba7686ff8429e7de1af9d494078bbd5f7cf03a9e39eb4e971c7672025f08a20 WatchSource:0}: Error finding container 8ba7686ff8429e7de1af9d494078bbd5f7cf03a9e39eb4e971c7672025f08a20: Status 404 returned error can't find the container with id 8ba7686ff8429e7de1af9d494078bbd5f7cf03a9e39eb4e971c7672025f08a20 Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.255521 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-48242bb7d1e237c901773838d3968989bd23340651806a5a4bd94e70ce5eb3b9 WatchSource:0}: Error finding container 48242bb7d1e237c901773838d3968989bd23340651806a5a4bd94e70ce5eb3b9: Status 404 returned error can't find the container with id 48242bb7d1e237c901773838d3968989bd23340651806a5a4bd94e70ce5eb3b9 Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.264237 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aedb1aab9603712abe233d308c0e51e40d2fb6e3bffd516b40b4ba182d1bb429 
WatchSource:0}: Error finding container aedb1aab9603712abe233d308c0e51e40d2fb6e3bffd516b40b4ba182d1bb429: Status 404 returned error can't find the container with id aedb1aab9603712abe233d308c0e51e40d2fb6e3bffd516b40b4ba182d1bb429 Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.476118 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.478631 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.478706 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.478726 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.478776 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.479593 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.528019 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.528132 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection 
refused" logger="UnhandledError" Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.621072 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.621264 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.627193 4845 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.629055 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:51:55.856985181 +0000 UTC Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.719915 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aedb1aab9603712abe233d308c0e51e40d2fb6e3bffd516b40b4ba182d1bb429"} Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.721178 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"48242bb7d1e237c901773838d3968989bd23340651806a5a4bd94e70ce5eb3b9"} Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.722520 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8ba7686ff8429e7de1af9d494078bbd5f7cf03a9e39eb4e971c7672025f08a20"} Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.724751 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"979577b20829b0ebe0549162765f5b2ce2677fa842bd6af8dee6ff0913c51afb"} Feb 02 10:32:00 crc kubenswrapper[4845]: I0202 10:32:00.726590 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73c9a9f0b2b452aab98efd95c7ca2d32f9daab3ea016c7b91f0a9f15ea54e21d"} Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.736771 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.736877 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:00 crc kubenswrapper[4845]: W0202 10:32:00.908817 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 
10:32:00 crc kubenswrapper[4845]: E0202 10:32:00.908966 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:01 crc kubenswrapper[4845]: E0202 10:32:01.035127 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.280378 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.282190 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.282259 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.282279 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.282341 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:32:01 crc kubenswrapper[4845]: E0202 10:32:01.283090 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.614765 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 
10:32:01 crc kubenswrapper[4845]: E0202 10:32:01.616787 4845 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.627556 4845 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.629780 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:22:58.267556801 +0000 UTC Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.732678 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850" exitCode=0 Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.732780 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.732849 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.734316 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.734390 4845 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.734413 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.735513 4845 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676" exitCode=0 Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.735587 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.735625 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.737064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.737124 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.737149 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.738347 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.742921 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.742999 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 
crc kubenswrapper[4845]: I0202 10:32:01.743028 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.743315 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.743195 4845 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb" exitCode=0 Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.743830 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.744518 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.744552 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.744572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.749168 4845 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20" exitCode=0 Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.749360 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.749430 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.750719 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.750753 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.750771 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.755251 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.755345 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.755382 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.755408 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf"} Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.755585 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.757181 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.757256 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:01 crc kubenswrapper[4845]: I0202 10:32:01.757284 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.628075 4845 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.630412 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:57:46.726250234 +0000 UTC Feb 02 10:32:02 crc kubenswrapper[4845]: E0202 10:32:02.636085 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="3.2s" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.763253 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.763316 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.763328 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.763337 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.767295 4845 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1" exitCode=0 Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.767394 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.767485 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.768679 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:02 crc 
kubenswrapper[4845]: I0202 10:32:02.768728 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.768739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.770455 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.771311 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.772582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.772637 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.772653 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.775428 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.775512 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.775579 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.775595 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56"} Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.775609 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.776909 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.776948 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.776960 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.777318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.777346 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.777355 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: W0202 10:32:02.792471 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:02 crc kubenswrapper[4845]: E0202 10:32:02.792571 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:02 crc kubenswrapper[4845]: W0202 10:32:02.812319 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:02 crc kubenswrapper[4845]: E0202 10:32:02.812437 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.884040 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.885405 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.885442 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.885454 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:02 crc kubenswrapper[4845]: I0202 10:32:02.885479 4845 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Feb 02 10:32:02 crc kubenswrapper[4845]: E0202 10:32:02.886402 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Feb 02 10:32:02 crc kubenswrapper[4845]: W0202 10:32:02.937813 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:02 crc kubenswrapper[4845]: E0202 10:32:02.937930 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.001283 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:32:03 crc kubenswrapper[4845]: W0202 10:32:03.122284 4845 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Feb 02 10:32:03 crc kubenswrapper[4845]: E0202 10:32:03.122413 4845 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 
10:32:03.631092 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:22:33.666920729 +0000 UTC Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.784194 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b"} Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.784410 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.785807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.785864 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.785874 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.788059 4845 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3" exitCode=0 Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.788196 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.788203 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.788702 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3"} Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.788852 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789519 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789569 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.789940 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.790562 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.790599 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:03 crc kubenswrapper[4845]: I0202 10:32:03.790609 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.178523 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.178820 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.180982 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.181139 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.181164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.631586 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:10:28.851185349 +0000 UTC Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798288 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd"} Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798358 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209"} Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798386 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc"} Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798434 4845 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798540 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.798457 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800015 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800091 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800107 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800233 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:04 crc kubenswrapper[4845]: I0202 10:32:04.800277 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.631950 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:06:40.870970435 +0000 UTC Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.745953 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.807030 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0"} Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.807101 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267"} Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.807314 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.809053 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.809123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:05 crc kubenswrapper[4845]: I0202 10:32:05.809142 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.086540 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.088258 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.088320 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.088343 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.088383 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 
10:32:06.134332 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.134541 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.134612 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.136768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.136814 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.136837 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.580182 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.596635 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.596937 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.599137 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.599213 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.599229 4845 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.632596 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:03:23.952659748 +0000 UTC Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.639881 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.810976 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.811055 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.811119 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.812735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.812774 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.812791 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.813053 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.813144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:06 crc kubenswrapper[4845]: I0202 10:32:06.813167 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 
10:32:07.333552 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.333843 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.335515 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.335586 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.335611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.633780 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:56:05.700791281 +0000 UTC Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.813630 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.815006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.815103 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.815124 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.919633 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 
10:32:07.919943 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.921420 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.921495 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:07 crc kubenswrapper[4845]: I0202 10:32:07.921516 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:08 crc kubenswrapper[4845]: I0202 10:32:08.634586 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:30:52.36641614 +0000 UTC Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.230588 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.230950 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.232738 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.232807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.232832 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.237586 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:09 crc 
kubenswrapper[4845]: I0202 10:32:09.597670 4845 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.597770 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.635149 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:33:38.290489499 +0000 UTC Feb 02 10:32:09 crc kubenswrapper[4845]: E0202 10:32:09.777856 4845 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.818292 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.819960 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.820021 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:09 crc kubenswrapper[4845]: I0202 10:32:09.820041 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:10 crc kubenswrapper[4845]: I0202 10:32:10.635779 4845 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:06:41.749009381 +0000 UTC Feb 02 10:32:11 crc kubenswrapper[4845]: I0202 10:32:11.636847 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:59:04.427357989 +0000 UTC Feb 02 10:32:12 crc kubenswrapper[4845]: I0202 10:32:12.637680 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:03:39.003811264 +0000 UTC Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.494217 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.494443 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.495668 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.495714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.495728 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.626924 4845 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.638392 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-12-14 08:24:43.774727274 +0000 UTC Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.831446 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.834176 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b" exitCode=255 Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.834237 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b"} Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.834434 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.835445 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.835508 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.835533 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:13 crc kubenswrapper[4845]: I0202 10:32:13.836453 4845 scope.go:117] "RemoveContainer" containerID="74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.185602 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.185822 4845 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.187375 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.187436 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.187457 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.519519 4845 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.519584 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.527403 4845 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.527471 4845 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.639383 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:54:57.328252696 +0000 UTC Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.840559 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.843107 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5"} Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.843283 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.844511 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.844575 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:14 crc kubenswrapper[4845]: I0202 10:32:14.844595 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:15 crc kubenswrapper[4845]: I0202 10:32:15.639995 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:54:12.263585485 +0000 UTC Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.143871 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.144165 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.144274 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.145746 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.145792 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.145804 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.156207 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.640537 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:19:59.787855823 +0000 UTC Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.850857 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.852564 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.852658 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:16 crc kubenswrapper[4845]: I0202 10:32:16.852692 4845 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:17 crc kubenswrapper[4845]: I0202 10:32:17.680008 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:30:05.292821977 +0000 UTC Feb 02 10:32:17 crc kubenswrapper[4845]: I0202 10:32:17.853871 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:17 crc kubenswrapper[4845]: I0202 10:32:17.855359 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:17 crc kubenswrapper[4845]: I0202 10:32:17.855627 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:17 crc kubenswrapper[4845]: I0202 10:32:17.855655 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:18 crc kubenswrapper[4845]: I0202 10:32:18.680578 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:15:07.935169359 +0000 UTC Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.520302 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.522688 4845 trace.go:236] Trace[685841295]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:32:06.910) (total time: 12612ms): Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[685841295]: ---"Objects listed" error: 12612ms (10:32:19.522) Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[685841295]: [12.612426617s] [12.612426617s] END 
Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.522709 4845 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.522839 4845 trace.go:236] Trace[792374462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:32:08.602) (total time: 10920ms): Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[792374462]: ---"Objects listed" error: 10920ms (10:32:19.522) Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[792374462]: [10.920455954s] [10.920455954s] END Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.522912 4845 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.523358 4845 trace.go:236] Trace[1986920572]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:32:08.432) (total time: 11090ms): Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[1986920572]: ---"Objects listed" error: 11090ms (10:32:19.523) Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[1986920572]: [11.090824877s] [11.090824877s] END Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.523378 4845 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.525052 4845 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.525203 4845 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.527013 4845 trace.go:236] Trace[1290163049]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:32:08.771) (total time: 10755ms): Feb 02 10:32:19 crc 
kubenswrapper[4845]: Trace[1290163049]: ---"Objects listed" error: 10755ms (10:32:19.526) Feb 02 10:32:19 crc kubenswrapper[4845]: Trace[1290163049]: [10.755385096s] [10.755385096s] END Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.527059 4845 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.534607 4845 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.560931 4845 csr.go:261] certificate signing request csr-npg7j is approved, waiting to be issued Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.576042 4845 csr.go:257] certificate signing request csr-npg7j is issued Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.597690 4845 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.597794 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.620570 4845 apiserver.go:52] "Watching apiserver" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.624133 4845 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 10:32:19 crc kubenswrapper[4845]: 
I0202 10:32:19.624436 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.624919 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.624968 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.625022 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.625169 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.625243 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.625829 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.625860 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.625929 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.625951 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.626379 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.627643 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.628054 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.628186 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.628287 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.628721 4845 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.629000 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.629159 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.629761 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.630188 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 
10:32:19.652326 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.669534 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.681558 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:50:22.632698342 +0000 UTC Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.682078 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.692918 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.704029 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rzb6b"] Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.704347 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.704943 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.706647 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.709623 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.709668 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.721449 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726488 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726529 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726551 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726574 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726593 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726614 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726635 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726655 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726676 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726696 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726717 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726737 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726759 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726782 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726804 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726825 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726845 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726870 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726914 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726937 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726958 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726979 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.726987 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727000 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727061 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727081 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727099 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 
10:32:19.727118 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727133 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727150 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727169 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727189 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727206 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727222 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727240 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727255 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727277 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727294 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727315 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727333 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727349 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727365 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727386 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727407 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727428 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727448 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727469 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727492 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727517 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727547 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727568 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727591 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727611 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727636 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727653 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" 
(UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727669 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727692 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727715 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727765 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727782 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727828 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727843 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727859 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727874 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727915 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727932 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727951 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727967 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727983 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728000 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728019 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728037 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728053 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728071 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728113 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728130 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728146 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728161 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728179 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728195 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728213 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728231 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728246 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728262 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728280 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728297 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728312 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728329 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728347 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728365 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728381 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728399 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728414 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728431 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728447 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728463 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728482 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728498 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728513 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728530 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728544 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728561 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728611 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728630 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728644 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: 
I0202 10:32:19.728661 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728677 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728694 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728716 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728733 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728752 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728770 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728787 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728803 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728820 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728836 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:32:19 crc kubenswrapper[4845]: 
I0202 10:32:19.728851 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728870 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728914 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728940 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728965 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728986 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729009 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729034 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729061 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729084 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729106 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729131 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729155 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729180 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729203 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729229 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729251 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729274 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729295 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729322 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729345 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729369 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 
10:32:19.729396 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729421 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729445 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729470 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729494 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729518 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729542 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729568 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729589 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729612 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729636 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " 
Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729659 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729681 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729709 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729736 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729761 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729783 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729808 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729831 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729854 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729874 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730017 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730071 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730094 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730117 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730148 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730171 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730194 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730216 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730243 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730267 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730293 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730316 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730343 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730369 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730393 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730413 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730438 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730463 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730490 
4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730513 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730536 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730559 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730579 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730600 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730625 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730646 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730669 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730691 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730714 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 
10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730737 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730761 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730898 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730934 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730966 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730994 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731018 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731053 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731106 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731139 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731172 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731202 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731229 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731254 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731281 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731311 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731345 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731372 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731399 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731426 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731454 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731482 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731553 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.727290 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728372 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728709 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.728641 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732964 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729025 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729041 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729248 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729296 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729351 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729659 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.729676 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730222 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730439 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730448 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730531 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730549 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.730819 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731288 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731434 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731521 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731535 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.731754 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732188 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733096 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732430 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732557 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732835 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733234 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732912 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.732974 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733005 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733054 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733554 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733625 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.733936 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.734081 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.734108 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.734221 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735315 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735597 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735817 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735839 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735970 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736038 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736066 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736404 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736516 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736874 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736949 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.736997 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.737135 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.737269 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.737298 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.738277 4845 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.738964 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.739380 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.744351 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.735835 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.744563 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.746925 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:20.246842876 +0000 UTC m=+21.338244426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.748317 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.748424 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.748513 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:32:20.248481984 +0000 UTC m=+21.339883474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.755059 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:20.255019964 +0000 UTC m=+21.346421494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.759089 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.759122 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.759142 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.759218 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:20.259194206 +0000 UTC m=+21.350595746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.768364 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.779223 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.779270 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.779290 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:19 crc kubenswrapper[4845]: E0202 10:32:19.779372 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:20.279346834 +0000 UTC m=+21.370748294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.784083 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.784279 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.785104 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.785341 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.785361 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.786964 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787223 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787302 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787340 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787600 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787814 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787977 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.787995 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.788158 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.788249 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.788366 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.788616 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.788689 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.789029 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.789217 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.789726 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.794447 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.794655 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.794668 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.794839 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795042 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795213 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795308 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795382 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795401 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795587 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.795861 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.796098 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.796188 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.799108 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.806196 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.806211 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.806383 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.808472 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.808752 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.808938 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.810351 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.810254 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.812316 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.812779 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813035 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813075 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813238 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813383 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813427 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813534 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.813690 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.814013 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.814102 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.814672 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.814881 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.815235 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.815454 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.816560 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.818408 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.819696 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.819727 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.820494 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.820527 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.820599 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.820709 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.820789 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821092 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821125 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821147 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821168 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821462 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.821467 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.822347 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.824316 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825077 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825275 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825331 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825929 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825969 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.826075 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.825207 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.826797 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.827140 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.827213 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.827822 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.828293 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.831408 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.833409 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.833945 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b94e8620-d850-4036-b311-42b2a6369c73-hosts-file\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.833991 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5szs4\" (UniqueName: \"kubernetes.io/projected/b94e8620-d850-4036-b311-42b2a6369c73-kube-api-access-5szs4\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834028 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834096 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834163 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834184 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834205 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834220 4845 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834233 4845 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834247 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834261 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834274 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834286 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834298 4845 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834293 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834310 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834366 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834386 4845 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834426 4845 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834463 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834510 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834542 4845 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834560 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834577 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834595 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" 
Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834627 4845 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834659 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834691 4845 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834708 4845 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834743 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834776 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834809 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834842 
4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834859 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834966 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.834991 4845 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835009 4845 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835027 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835061 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835100 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835157 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835191 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835226 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835244 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835261 4845 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835308 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835327 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" 
Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835344 4845 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835379 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835399 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835445 4845 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835477 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835509 4845 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835527 4845 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835560 4845 reconciler_common.go:293] 
"Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835593 4845 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835625 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835657 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835704 4845 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835880 4845 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835920 4845 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835939 4845 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835951 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.835971 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.836766 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837006 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837215 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837277 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837462 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837490 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837789 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.837939 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838191 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838281 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838300 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838340 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838384 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838445 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838479 4845 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838500 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838548 4845 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838593 4845 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838638 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838671 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838690 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838739 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838772 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838832 4845 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838864 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838883 4845 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.838957 4845 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839004 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839051 4845 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839085 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839103 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") 
on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839163 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839196 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839242 4845 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839278 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839297 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839343 4845 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839376 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" 
Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839409 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839453 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839484 4845 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839502 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839534 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839564 4845 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839596 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839654 4845 
reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839687 4845 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839705 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839723 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839741 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839757 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839774 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839790 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839809 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839825 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839842 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839859 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839876 4845 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839915 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839932 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node 
\"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839949 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839966 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.839983 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840000 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840016 4845 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840033 4845 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840050 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 
10:32:19.840067 4845 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840084 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840104 4845 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840122 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840138 4845 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840155 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840173 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840191 4845 reconciler_common.go:293] "Volume detached for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840207 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840225 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840242 4845 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840261 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840277 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840294 4845 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840312 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath 
\"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840329 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840346 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840363 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840380 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840396 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840413 4845 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840430 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840446 4845 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.840463 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.841139 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.841197 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.841410 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.842332 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.842807 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843196 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843456 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843643 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843688 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843939 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.843973 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.844999 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.845097 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.845452 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.845552 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.845707 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.846526 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.848009 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.848376 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.851378 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.851546 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.851620 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.852799 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.853131 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.853243 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.853626 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.853731 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.853814 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.854544 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.854836 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.855037 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.855199 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.856377 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.856425 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.856571 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.856876 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.857081 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.857116 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.857371 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.857477 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.858071 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.858238 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.859092 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.861960 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.876099 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.886685 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.889540 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.896438 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.898570 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.898712 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.915456 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.924026 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.935202 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941291 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b94e8620-d850-4036-b311-42b2a6369c73-hosts-file\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941336 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5szs4\" (UniqueName: \"kubernetes.io/projected/b94e8620-d850-4036-b311-42b2a6369c73-kube-api-access-5szs4\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941418 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941433 4845 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941446 4845 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941458 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941469 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941481 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941493 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941505 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc 
kubenswrapper[4845]: I0202 10:32:19.941517 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941529 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941541 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941552 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941564 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941576 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941588 4845 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941601 4845 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941612 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941624 4845 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941636 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941648 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941661 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941674 4845 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941686 4845 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941697 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941709 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941720 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941732 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941744 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941757 4845 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941768 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941780 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941791 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941802 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941814 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941825 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941835 4845 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941850 4845 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: 
I0202 10:32:19.941860 4845 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941872 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941901 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941913 4845 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941925 4845 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941939 4845 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941951 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941963 4845 reconciler_common.go:293] "Volume detached for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941975 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941989 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942005 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942022 4845 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942034 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942046 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942057 4845 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942068 4845 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942085 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942097 4845 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942110 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.942122 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.941472 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b94e8620-d850-4036-b311-42b2a6369c73-hosts-file\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.945003 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.953924 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.961952 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.969697 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.973086 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.974714 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5szs4\" (UniqueName: \"kubernetes.io/projected/b94e8620-d850-4036-b311-42b2a6369c73-kube-api-access-5szs4\") pod \"node-resolver-rzb6b\" (UID: \"b94e8620-d850-4036-b311-42b2a6369c73\") " pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:19 crc kubenswrapper[4845]: I0202 10:32:19.986184 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.021821 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.096038 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rzb6b" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.347511 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.347589 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.347619 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.347643 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.347662 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347748 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347802 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:21.347787396 +0000 UTC m=+22.439188846 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347855 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:21.347848737 +0000 UTC m=+22.439250187 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347944 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347956 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347968 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.347990 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:21.347984411 +0000 UTC m=+22.439385861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348036 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348056 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:21.348050983 +0000 UTC m=+22.439452433 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348094 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348104 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348111 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.348131 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:21.348126275 +0000 UTC m=+22.439527725 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.578225 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 10:27:19 +0000 UTC, rotation deadline is 2026-12-07 15:58:08.103272923 +0000 UTC Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.578305 4845 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7397h25m47.524970169s for next certificate rotation Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.682146 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:48:43.166575541 +0000 UTC Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.868294 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.868371 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.868388 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0aee212c36ea4834859a48b48df56eef6b94d9bbeee9e5f558f84bfa2387796c"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.869980 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.870472 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.872079 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" exitCode=255 Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.872155 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.872271 4845 scope.go:117] "RemoveContainer" containerID="74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.873624 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.873702 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"25e9942ce260cd8fefd4586cbb1b3fc34396b829d3f3478a317c741d0d9f8437"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.876136 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rzb6b" event={"ID":"b94e8620-d850-4036-b311-42b2a6369c73","Type":"ContainerStarted","Data":"56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.876181 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rzb6b" event={"ID":"b94e8620-d850-4036-b311-42b2a6369c73","Type":"ContainerStarted","Data":"23fef1dedd5b078b0e6fbaf6ec0c05fc50fc69097e834f39c8e5e754b2f46515"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.877462 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"30aa69b7e93840f6f33b7d4a7200981d9e044a7123cb74614023a21365973090"} Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.895567 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.920051 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.944421 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.962812 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.977132 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.986531 4845 scope.go:117] "RemoveContainer" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.986588 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:32:20 crc kubenswrapper[4845]: E0202 10:32:20.986792 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:32:20 crc kubenswrapper[4845]: I0202 10:32:20.993721 4845 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.007714 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.018983 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.030067 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.043117 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:13Z\\\",\\\"message\\\":\\\"W0202 10:32:02.975508 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:32:02.975920 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028322 cert, and key in /tmp/serving-cert-231171005/serving-signer.crt, /tmp/serving-cert-231171005/serving-signer.key\\\\nI0202 10:32:03.164665 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:32:03.171077 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:32:03.171332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:03.176019 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231171005/tls.crt::/tmp/serving-cert-231171005/tls.key\\\\\\\"\\\\nF0202 10:32:13.433273 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 
10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.056812 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.069929 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.081352 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.096600 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.109261 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356121 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356193 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356217 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356212 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-thbz4"] Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356333 4845 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356353 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356363 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356370 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356237 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356391 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:23.356343942 +0000 UTC m=+24.447745392 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356407 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356455 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356471 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356444 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:23.356433854 +0000 UTC m=+24.447835304 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356590 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:23.356562478 +0000 UTC m=+24.447963938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.356572 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356651 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356670 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:23.356660011 +0000 UTC m=+24.448061461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.356690 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:23.356681941 +0000 UTC m=+24.448083391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.357042 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.357779 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kzwst"] Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.358309 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2wnn9"] Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.358468 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.359016 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.360441 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.361038 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.361161 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.362871 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.364424 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.364477 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.364552 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.364556 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.364941 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.365717 4845 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.366237 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.371435 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.387359 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:13Z\\\",\\\"message\\\":\\\"W0202 10:32:02.975508 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:32:02.975920 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028322 cert, and key in /tmp/serving-cert-231171005/serving-signer.crt, /tmp/serving-cert-231171005/serving-signer.key\\\\nI0202 10:32:03.164665 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:32:03.171077 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:32:03.171332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:03.176019 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231171005/tls.crt::/tmp/serving-cert-231171005/tls.key\\\\\\\"\\\\nF0202 10:32:13.433273 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.408876 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.440847 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457206 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-bin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457228 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-system-cni-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457335 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebf2f253-531f-4835-84c1-928680352f7f-proxy-tls\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457441 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cni-binary-copy\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457469 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-multus\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457507 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-multus-certs\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457537 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-kubelet\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457594 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-conf-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457643 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457686 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-cni-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457709 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-daemon-config\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457764 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebf2f253-531f-4835-84c1-928680352f7f-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457792 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cnibin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457814 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebf2f253-531f-4835-84c1-928680352f7f-rootfs\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457834 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h9c\" (UniqueName: \"kubernetes.io/projected/01f1334f-a21c-4487-a1d6-dbecf7017c59-kube-api-access-k9h9c\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457896 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-k8s-cni-cncf-io\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " 
pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457928 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-binary-copy\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.457963 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-socket-dir-parent\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458012 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-hostroot\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458028 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-etc-kubernetes\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458058 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-cnibin\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " 
pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458085 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-os-release\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458103 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8c6\" (UniqueName: \"kubernetes.io/projected/ebf2f253-531f-4835-84c1-928680352f7f-kube-api-access-jc8c6\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458120 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-os-release\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-system-cni-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458167 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-netns\") pod \"multus-kzwst\" (UID: 
\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.458183 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptl6\" (UniqueName: \"kubernetes.io/projected/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-kube-api-access-4ptl6\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.459981 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.473403 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.487181 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.501261 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.514087 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.524785 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.535999 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.550332 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.559741 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: 
\"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560089 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-system-cni-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560199 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-bin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cni-binary-copy\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560444 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-multus\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560557 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-multus-certs\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " 
pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560693 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebf2f253-531f-4835-84c1-928680352f7f-proxy-tls\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560831 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-kubelet\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560972 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-conf-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561100 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561217 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-cni-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc 
kubenswrapper[4845]: I0202 10:32:21.561320 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-daemon-config\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561437 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cnibin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561554 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebf2f253-531f-4835-84c1-928680352f7f-rootfs\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561694 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebf2f253-531f-4835-84c1-928680352f7f-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561817 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-k8s-cni-cncf-io\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561970 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-binary-copy\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562110 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h9c\" (UniqueName: \"kubernetes.io/projected/01f1334f-a21c-4487-a1d6-dbecf7017c59-kube-api-access-k9h9c\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562745 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-hostroot\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562791 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-etc-kubernetes\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562823 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-socket-dir-parent\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562843 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-cnibin\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562842 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-hostroot\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562878 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-os-release\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561635 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ebf2f253-531f-4835-84c1-928680352f7f-rootfs\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562913 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8c6\" (UniqueName: \"kubernetes.io/projected/ebf2f253-531f-4835-84c1-928680352f7f-kube-api-access-jc8c6\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561019 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-conf-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562934 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-os-release\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-netns\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562973 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptl6\" (UniqueName: \"kubernetes.io/projected/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-kube-api-access-4ptl6\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562969 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-cnibin\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561438 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-cni-dir\") pod \"multus-kzwst\" (UID: 
\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.562995 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-system-cni-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563025 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-etc-kubernetes\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560480 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-multus\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560965 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-kubelet\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563097 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-socket-dir-parent\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561483 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cnibin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560616 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-multus-certs\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560576 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-tuning-conf-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563355 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-os-release\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.561903 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-k8s-cni-cncf-io\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563397 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-run-netns\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560240 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-host-var-lib-cni-bin\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563438 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-os-release\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.560206 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/01f1334f-a21c-4487-a1d6-dbecf7017c59-system-cni-dir\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.563508 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-system-cni-dir\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.565146 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.586241 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.598690 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.613328 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.623679 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.637062 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-cni-binary-copy\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.637072 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-multus-daemon-config\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.637755 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebf2f253-531f-4835-84c1-928680352f7f-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.638279 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " 
pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.638311 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01f1334f-a21c-4487-a1d6-dbecf7017c59-cni-binary-copy\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.640238 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebf2f253-531f-4835-84c1-928680352f7f-proxy-tls\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.642654 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h9c\" (UniqueName: \"kubernetes.io/projected/01f1334f-a21c-4487-a1d6-dbecf7017c59-kube-api-access-k9h9c\") pod \"multus-additional-cni-plugins-thbz4\" (UID: \"01f1334f-a21c-4487-a1d6-dbecf7017c59\") " pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.643024 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:13Z\\\",\\\"message\\\":\\\"W0202 10:32:02.975508 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:32:02.975920 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028322 cert, and key in /tmp/serving-cert-231171005/serving-signer.crt, /tmp/serving-cert-231171005/serving-signer.key\\\\nI0202 10:32:03.164665 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:32:03.171077 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:32:03.171332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:03.176019 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231171005/tls.crt::/tmp/serving-cert-231171005/tls.key\\\\\\\"\\\\nF0202 10:32:13.433273 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 
10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.643321 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8c6\" (UniqueName: \"kubernetes.io/projected/ebf2f253-531f-4835-84c1-928680352f7f-kube-api-access-jc8c6\") pod \"machine-config-daemon-2wnn9\" (UID: \"ebf2f253-531f-4835-84c1-928680352f7f\") " pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.644572 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptl6\" (UniqueName: \"kubernetes.io/projected/310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3-kube-api-access-4ptl6\") pod \"multus-kzwst\" (UID: \"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\") " pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.660615 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.671313 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-thbz4" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.682093 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.682126 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kzwst" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.683252 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:19:36.737907839 +0000 UTC Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.686419 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:32:21 crc kubenswrapper[4845]: W0202 10:32:21.697092 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod310f06ec_b9c5_40c9_aeb9_a6e4ef5304c3.slice/crio-59b1c33b87539606b5c541081e96d98c017ec607104265a3ef4908039e16e5ed WatchSource:0}: Error finding container 59b1c33b87539606b5c541081e96d98c017ec607104265a3ef4908039e16e5ed: Status 404 returned error can't find the container with id 59b1c33b87539606b5c541081e96d98c017ec607104265a3ef4908039e16e5ed Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.700248 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: W0202 10:32:21.705600 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf2f253_531f_4835_84c1_928680352f7f.slice/crio-bc4a27138c54b706fb0fc9998b70b52b659090ec0b2c3097a533a45b406bbb7b WatchSource:0}: Error finding container bc4a27138c54b706fb0fc9998b70b52b659090ec0b2c3097a533a45b406bbb7b: Status 404 returned error can't find the container with id bc4a27138c54b706fb0fc9998b70b52b659090ec0b2c3097a533a45b406bbb7b Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.711900 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.712063 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.712527 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.712610 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.712687 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.712756 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.720940 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.721670 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.723763 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.724740 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.726321 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.728180 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.729037 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.730346 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.731281 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.732561 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.733293 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.734311 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.735689 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.736785 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.741574 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.742402 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.744549 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.745173 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.746061 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.748623 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.749386 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.751281 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.751981 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.753260 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.753704 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.757013 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.757687 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.758183 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.759964 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.760504 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.762809 4845 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.762935 4845 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.764762 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.765787 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.766292 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.767777 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.768504 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.769584 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.770276 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.771426 4845 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.771943 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.775569 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.776706 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.777348 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.777827 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.779138 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.781146 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.781929 4845 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.782413 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.783312 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.783769 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.784940 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.785703 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.786194 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.787708 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sh5vd"] Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.788603 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.791945 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.792245 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.792374 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.792545 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.802382 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.806351 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.806651 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.826403 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.840676 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.853131 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.866982 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867335 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867449 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867552 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867658 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867765 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.867972 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868089 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868183 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868275 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868427 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868523 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgfw\" (UniqueName: \"kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868622 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868704 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868785 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.868878 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.869023 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.869129 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.869221 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.869311 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.884839 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerStarted","Data":"59b1c33b87539606b5c541081e96d98c017ec607104265a3ef4908039e16e5ed"} Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.885975 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerStarted","Data":"7026b81ced6a434941c75f5240aa0c7022340daff2657863bf36bd03319f3ebf"} Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.887616 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.889827 4845 scope.go:117] "RemoveContainer" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" Feb 02 10:32:21 crc kubenswrapper[4845]: E0202 10:32:21.890117 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.894940 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"bc4a27138c54b706fb0fc9998b70b52b659090ec0b2c3097a533a45b406bbb7b"} Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.896285 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.916081 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.929849 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74689dffec06ff54020fec1e21fdf780832e7ee03a7fed6c12395b985502870b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:13Z\\\",\\\"message\\\":\\\"W0202 10:32:02.975508 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:32:02.975920 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028322 cert, and key in /tmp/serving-cert-231171005/serving-signer.crt, /tmp/serving-cert-231171005/serving-signer.key\\\\nI0202 10:32:03.164665 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:32:03.171077 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:32:03.171332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:03.176019 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-231171005/tls.crt::/tmp/serving-cert-231171005/tls.key\\\\\\\"\\\\nF0202 10:32:13.433273 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.943642 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.963874 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970718 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970764 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970782 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970800 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970817 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units\") pod \"ovnkube-node-sh5vd\" (UID: 
\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970829 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970845 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970860 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970862 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970878 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 
crc kubenswrapper[4845]: I0202 10:32:21.970912 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970937 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgfw\" (UniqueName: \"kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970960 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.970981 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971002 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971027 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971043 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971057 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971072 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971084 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971106 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971163 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971197 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971217 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971236 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971256 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971650 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971731 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971774 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971801 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.971849 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config\") pod \"ovnkube-node-sh5vd\" (UID: 
\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972116 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972238 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972281 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972308 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972332 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 
crc kubenswrapper[4845]: I0202 10:32:21.972354 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.972718 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.975448 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.977847 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.987833 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgfw\" (UniqueName: \"kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw\") pod \"ovnkube-node-sh5vd\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.988173 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:21 crc kubenswrapper[4845]: I0202 10:32:21.999721 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.011963 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.023043 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.034188 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.048053 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.059651 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac3
9aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.074292 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.093943 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.115739 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.122415 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.134918 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: W0202 10:32:22.144634 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b93b041_3f3f_47ba_a9d4_d09de1b326dc.slice/crio-000add16087c8ea520a073a5f979801fd8148c67d21762dc28e5704d78c31411 WatchSource:0}: Error finding container 000add16087c8ea520a073a5f979801fd8148c67d21762dc28e5704d78c31411: Status 404 returned error can't find the container with id 000add16087c8ea520a073a5f979801fd8148c67d21762dc28e5704d78c31411 Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.158992 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.189393 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.203179 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.225360 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.684049 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:40:47.321187711 +0000 UTC Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.904398 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.906558 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.906622 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.908128 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerStarted","Data":"2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.910360 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c" exitCode=0 Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.910410 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.912614 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" exitCode=0 Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.912657 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.912685 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" 
event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"000add16087c8ea520a073a5f979801fd8148c67d21762dc28e5704d78c31411"} Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.933463 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.950632 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.969897 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:22 crc kubenswrapper[4845]: I0202 10:32:22.992153 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.031051 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.057398 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.070846 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.087153 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.098526 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.112635 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.130704 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f64
45c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.143557 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.161291 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.175104 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.185352 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.198694 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.214598 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.232229 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.244767 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.269052 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.283502 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077
53d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.296549 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.313266 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.328295 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.390346 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.390429 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.390454 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.390477 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.390505 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.390607 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.390655 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:27.390641907 +0000 UTC m=+28.482043357 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.390986 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:27.390978337 +0000 UTC m=+28.482379787 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391021 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391043 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:27.391037228 +0000 UTC m=+28.482438678 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391096 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391106 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391115 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391135 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:27.391129801 +0000 UTC m=+28.482531251 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391170 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391178 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391185 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.391223 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:27.391216224 +0000 UTC m=+28.482617674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.518483 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.534158 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.537176 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.541237 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.551604 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.565672 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.576400 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.585680 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.597369 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.607950 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f715
0fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02
T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.622427 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.634845 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.651489 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.665629 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.682349 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.684839 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:00:59.631176905 +0000 UTC Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.699090 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xdtrh"] Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.699442 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.702510 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.702676 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.703338 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.703358 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.703699 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.711839 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.711857 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.711849 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.712006 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.712117 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:23 crc kubenswrapper[4845]: E0202 10:32:23.712180 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.713635 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.727567 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.738705 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.761902 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.775464 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.790011 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.793794 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-host\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.793987 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f24s\" (UniqueName: \"kubernetes.io/projected/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-kube-api-access-9f24s\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.794174 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-serviceca\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.806228 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.819054 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.833057 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.849870 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.893136 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.894566 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-host\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " 
pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.894685 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f24s\" (UniqueName: \"kubernetes.io/projected/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-kube-api-access-9f24s\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.894681 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-host\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.894803 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-serviceca\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.895689 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-serviceca\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.920751 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f" exitCode=0 Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.920848 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" 
event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f"} Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.926326 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.926356 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.926369 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.926380 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.946811 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f24s\" (UniqueName: \"kubernetes.io/projected/86c7aea7-01dc-4f6d-ab41-94447a76fd6e-kube-api-access-9f24s\") pod \"node-ca-xdtrh\" (UID: \"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\") " pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.954338 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:23 crc kubenswrapper[4845]: I0202 10:32:23.995405 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.037261 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.072611 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.089442 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xdtrh" Feb 02 10:32:24 crc kubenswrapper[4845]: W0202 10:32:24.101279 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86c7aea7_01dc_4f6d_ab41_94447a76fd6e.slice/crio-f7b5c48677ea9e06fac08d757561709c77b36cab742fc1a2c9b0109414152ea9 WatchSource:0}: Error finding container f7b5c48677ea9e06fac08d757561709c77b36cab742fc1a2c9b0109414152ea9: Status 404 returned error can't find the container with id f7b5c48677ea9e06fac08d757561709c77b36cab742fc1a2c9b0109414152ea9 Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.112088 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.151155 4845 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.193108 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.230235 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.270719 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.308122 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.350107 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.389077 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.428851 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.470953 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.513612 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.685021 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:00:45.327706892 +0000 UTC Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.933823 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.933882 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.935437 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xdtrh" event={"ID":"86c7aea7-01dc-4f6d-ab41-94447a76fd6e","Type":"ContainerStarted","Data":"520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af"} Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 
10:32:24.935497 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xdtrh" event={"ID":"86c7aea7-01dc-4f6d-ab41-94447a76fd6e","Type":"ContainerStarted","Data":"f7b5c48677ea9e06fac08d757561709c77b36cab742fc1a2c9b0109414152ea9"} Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.937923 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42" exitCode=0 Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.938000 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42"} Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.952089 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.968312 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T1
0:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:24 crc kubenswrapper[4845]: I0202 10:32:24.984011 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.001252 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.014612 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.034864 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.046760 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.058655 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.074854 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.086729 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.099060 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.113574 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.126278 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.138898 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.150242 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.160752 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.189601 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.227660 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.285471 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.315547 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.351627 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.402450 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.444599 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.477313 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.512470 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.554435 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.597815 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.637609 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.685616 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:31:30.973017089 +0000 UTC Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.712322 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.712355 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.712484 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:25 crc kubenswrapper[4845]: E0202 10:32:25.712584 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:25 crc kubenswrapper[4845]: E0202 10:32:25.712754 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:25 crc kubenswrapper[4845]: E0202 10:32:25.712919 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.925238 4845 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.928394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.928451 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.928464 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.928591 4845 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.937804 4845 kubelet_node_status.go:115] "Node was 
previously registered" node="crc" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.938209 4845 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.939623 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.939704 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.939752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.939805 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.939827 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:25Z","lastTransitionTime":"2026-02-02T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.944588 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0"} Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.944506 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0" exitCode=0 Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.968904 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: E0202 10:32:25.969129 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5
987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.973426 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.973482 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.973501 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.973525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.973543 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:25Z","lastTransitionTime":"2026-02-02T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:25 crc kubenswrapper[4845]: I0202 10:32:25.984533 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:25 crc kubenswrapper[4845]: E0202 10:32:25.991952 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.001107 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.003436 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.003537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.003557 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.003581 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.003611 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.013056 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: E0202 10:32:26.019308 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.023675 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.023712 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.023721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.023735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.023745 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.028997 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:
32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: E0202 10:32:26.034816 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.038409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.038450 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.038468 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.038491 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.038504 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.047679 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: E0202 10:32:26.050651 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: E0202 10:32:26.050758 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.056986 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.057018 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.057027 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.057040 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.057049 4845 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.067128 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e
81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.081618 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077
53d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.096138 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162334 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162343 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162355 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162365 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.162829 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.175543 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.187173 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.202713 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.230386 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.264210 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.264247 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.264258 4845 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.264274 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.264285 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.366857 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.366931 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.366945 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.366962 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.366974 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.469500 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.469542 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.469554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.469568 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.469581 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.572807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.572917 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.573071 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.573104 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.573126 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.604005 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.608785 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.617307 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.619501 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.630662 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.645862 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.667795 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.675969 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc 
kubenswrapper[4845]: I0202 10:32:26.676018 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.676031 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.676056 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.676070 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.680526 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.686349 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:12:55.948069757 +0000 UTC Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.690209 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.699649 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T1
0:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.708575 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.719996 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.752110 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.771733 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.778606 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.778654 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.778664 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.778679 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.778689 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.788130 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.800173 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.812737 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.852165 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.880834 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.880871 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.880954 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.880987 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.881000 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.891762 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.932950 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.952078 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6" exitCode=0 Feb 02 10:32:26 crc kubenswrapper[4845]: 
I0202 10:32:26.952163 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.959968 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.972706 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a6952
0ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.987120 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.987161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.987170 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.987186 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 02 10:32:26 crc kubenswrapper[4845]: I0202 10:32:26.987196 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:26Z","lastTransitionTime":"2026-02-02T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:26 crc kubenswrapper[4845]: E0202 10:32:26.988054 4845 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.029423 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.072219 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077
53d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.090191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.090234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.090250 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.090271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.090286 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.111387 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb02013248
7234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.155061 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.194006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.194057 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.194070 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.194088 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.194100 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.195129 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.233997 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.271337 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.297191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.297224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.297236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.297254 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.297268 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.314500 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.349794 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.396584 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.400956 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.401226 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.401388 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.401691 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.401874 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.434207 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.471444 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.474972 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.475093 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.475133 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.475164 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.475189 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475265 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475322 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:35.475304226 +0000 UTC m=+36.566705696 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475663 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:35.475649086 +0000 UTC m=+36.567050546 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475754 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475777 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475792 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475825 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:35.475816131 +0000 UTC m=+36.567217591 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475879 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475913 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475923 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.475950 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:32:35.475941914 +0000 UTC m=+36.567343384 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.476000 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.476028 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:35.476020646 +0000 UTC m=+36.567422106 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.504521 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.504551 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.504561 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.504576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.504587 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.515031 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.550619 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.592705 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.607241 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.607310 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.607333 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc 
kubenswrapper[4845]: I0202 10:32:27.607358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.607377 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.631506 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.676377 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.687496 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:41:19.240909339 +0000 UTC Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.710442 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.710505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.710525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.710552 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.710569 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.711977 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.712060 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.712173 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.712163 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.712340 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:27 crc kubenswrapper[4845]: E0202 10:32:27.712531 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.716293 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.753166 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9f
b5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815037 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815117 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815130 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.815337 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.840412 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.877286 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.911706 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.917397 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.917432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.917444 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.917460 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.917472 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:27Z","lastTransitionTime":"2026-02-02T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.951749 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.971482 4845 generic.go:334] "Generic (PLEG): container finished" podID="01f1334f-a21c-4487-a1d6-dbecf7017c59" containerID="0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233" exitCode=0 Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.971552 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" 
event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerDied","Data":"0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233"} Feb 02 10:32:27 crc kubenswrapper[4845]: I0202 10:32:27.989794 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.019970 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.020005 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.020014 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.020026 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.020035 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.029513 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.072059 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.114835 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.122110 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.122139 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.122149 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.122164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.122176 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.159450 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.195010 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.225176 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.225210 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.225225 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.225247 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.225263 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.236173 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.276744 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.311439 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.327415 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.327447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.327457 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.327471 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.327481 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.351664 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.392785 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.430660 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.430724 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.430735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.430761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.430803 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.432218 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133f
d1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.472429 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.514300 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.532855 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.532945 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.532959 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.532979 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.532992 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.555060 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.593835 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.629595 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.637031 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.637083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.637102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.637125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.637145 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.687759 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:35:41.812495383 +0000 UTC Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.739663 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.739701 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.739719 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.739742 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.739760 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.810729 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.811770 4845 scope.go:117] "RemoveContainer" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" Feb 02 10:32:28 crc kubenswrapper[4845]: E0202 10:32:28.812050 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.843696 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.843747 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.843761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.843784 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.843798 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.946838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.946913 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.946927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.946956 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.946972 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:28Z","lastTransitionTime":"2026-02-02T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.980415 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf"} Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.980748 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.980861 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:28 crc kubenswrapper[4845]: I0202 10:32:28.986818 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" event={"ID":"01f1334f-a21c-4487-a1d6-dbecf7017c59","Type":"ContainerStarted","Data":"a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.007245 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.017145 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.017213 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.025271 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://077
53d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.038965 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.053863 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.053956 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.053974 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.053999 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.054019 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.070858 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.091414 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.105930 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.119869 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.133611 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.147542 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.157542 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.157580 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.157591 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.157606 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.157618 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.164234 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.177757 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.189357 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.206607 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.217609 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f715
0fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02
T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.226146 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.260420 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.260803 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.260898 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.261006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.261124 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.273509 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.310585 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.355152 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.368264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.368305 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.368315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.368333 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.368345 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.387324 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.429434 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.453243 4845 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.470245 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.470310 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.470330 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.470362 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.470382 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.472656 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.513176 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.551257 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.573189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.573240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.573252 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.573268 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.573279 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.600263 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.639807 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.676492 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.676572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.676595 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.676624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.676643 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.688678 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:22:51.184455895 +0000 UTC Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.711620 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.711645 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.711802 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:29 crc kubenswrapper[4845]: E0202 10:32:29.711790 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:29 crc kubenswrapper[4845]: E0202 10:32:29.712023 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:29 crc kubenswrapper[4845]: E0202 10:32:29.712211 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.777502 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.781654 4845 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.781703 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.781715 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.781731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.781742 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.806047 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.823024 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.841099 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.873267 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.887347 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.887385 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.887396 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.887409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.887420 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.898431 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.919468 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.954752 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.990109 4845 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.990151 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.990163 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.990180 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.990220 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:29Z","lastTransitionTime":"2026-02-02T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:29 crc kubenswrapper[4845]: I0202 10:32:29.991452 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.012089 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.036302 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.078567 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.093339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.093411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.093429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.093455 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.093483 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.117176 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.156093 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.196826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.196920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.196935 4845 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.196957 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.196970 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.197108 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.237473 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.270535 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.300137 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.300193 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.300205 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.300224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.300236 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.308816 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.356617 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.392417 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.403530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.403582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.403600 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.403626 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.403645 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.428437 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.506435 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.506496 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.506516 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.506539 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.506555 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.613342 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.613397 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.613415 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.613444 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.613461 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.689289 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:05:58.974251374 +0000 UTC Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.716591 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.716636 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.716652 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.716675 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.716692 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.819318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.819360 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.819372 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.819530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.819541 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.922787 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.922828 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.922839 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.922855 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:30 crc kubenswrapper[4845]: I0202 10:32:30.922866 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:30Z","lastTransitionTime":"2026-02-02T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.015017 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.025767 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.025802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.025815 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.025832 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.025845 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.128827 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.128876 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.128904 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.128922 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.128933 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.231612 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.231678 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.231688 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.231702 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.231712 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.335071 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.335126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.335142 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.335163 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.335174 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.438380 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.438435 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.438451 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.438472 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.438486 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.541367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.541825 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.541839 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.541858 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.541896 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.645034 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.645096 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.645105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.645119 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.645143 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.690387 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:31:16.791410722 +0000 UTC Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.712262 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.712271 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.712456 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:31 crc kubenswrapper[4845]: E0202 10:32:31.712631 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:31 crc kubenswrapper[4845]: E0202 10:32:31.712769 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:31 crc kubenswrapper[4845]: E0202 10:32:31.712929 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.748394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.748436 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.748445 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.748459 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.748471 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.851418 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.851487 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.851498 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.851540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.851556 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.954463 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.954524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.954547 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.954571 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:31 crc kubenswrapper[4845]: I0202 10:32:31.954589 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:31Z","lastTransitionTime":"2026-02-02T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.022420 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/0.log" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.027400 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf" exitCode=1 Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.027453 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.028527 4845 scope.go:117] "RemoveContainer" containerID="81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.053034 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.057164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.057345 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.057385 4845 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.057550 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.057590 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.076406 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.099701 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.118410 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.135154 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: 
I0202 10:32:32.153789 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.160143 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.160171 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.160183 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.160198 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.160207 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.172398 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.184645 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.198219 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.219145 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending 
*v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.245217 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.258019 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.261955 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.261988 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.261997 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.262011 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.262020 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.269720 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.284933 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.297085 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.366286 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.366373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.366387 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.366432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.366444 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.469216 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.469257 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.469268 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.469283 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.469293 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.571931 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.571971 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.571983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.571999 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.572010 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.674248 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.674304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.674317 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.674331 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.674340 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.691505 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:30:24.33298604 +0000 UTC Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.776994 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.777027 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.777035 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.777061 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.777071 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.879497 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.879531 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.879556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.879570 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.879579 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.982164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.982194 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.982203 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.982216 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:32 crc kubenswrapper[4845]: I0202 10:32:32.982224 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:32Z","lastTransitionTime":"2026-02-02T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.032092 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/1.log" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.032736 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/0.log" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.035729 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d" exitCode=1 Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.035789 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.035846 4845 scope.go:117] "RemoveContainer" containerID="81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.036742 4845 scope.go:117] "RemoveContainer" containerID="85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d" Feb 02 10:32:33 crc kubenswrapper[4845]: E0202 10:32:33.036975 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.058614 4845 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a
00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.074868 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.085009 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.085052 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.085069 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.085091 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.085107 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.089509 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.101109 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.115764 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T1
0:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.125968 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.142218 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.157283 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk"] Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.157795 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.159865 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.159942 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.159949 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.180171 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.188172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.188311 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.188326 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.188341 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.188356 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.197492 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.212581 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.237498 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.250635 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.263932 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.277337 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291593 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291606 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291622 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291634 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.291731 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.305213 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.320639 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.335379 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.335464 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdnv\" (UniqueName: \"kubernetes.io/projected/67f40dda-fb6a-490a-86a1-a14d6b183c8b-kube-api-access-zvdnv\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.335500 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.335517 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: 
\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.347165 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.357492 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.370021 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.383155 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.394371 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.394428 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.394443 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.394462 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.394475 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.395847 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.410556 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.434297 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d
96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.437491 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdnv\" (UniqueName: \"kubernetes.io/projected/67f40dda-fb6a-490a-86a1-a14d6b183c8b-kube-api-access-zvdnv\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.437783 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 
10:32:33.437817 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.437858 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.438487 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.438980 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.447167 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.452821 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67f40dda-fb6a-490a-86a1-a14d6b183c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.458282 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdnv\" (UniqueName: \"kubernetes.io/projected/67f40dda-fb6a-490a-86a1-a14d6b183c8b-kube-api-access-zvdnv\") pod 
\"ovnkube-control-plane-749d76644c-c2rmk\" (UID: \"67f40dda-fb6a-490a-86a1-a14d6b183c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.462597 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.475559 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.482157 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: W0202 10:32:33.490983 4845 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f40dda_fb6a_490a_86a1_a14d6b183c8b.slice/crio-f0afaa9dd829ee2f0aa60a57b5c121c06decee396a0f4903ec001af7378c525f WatchSource:0}: Error finding container f0afaa9dd829ee2f0aa60a57b5c121c06decee396a0f4903ec001af7378c525f: Status 404 returned error can't find the container with id f0afaa9dd829ee2f0aa60a57b5c121c06decee396a0f4903ec001af7378c525f Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.496557 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.496603 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.496619 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.496642 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.496659 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.499987 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5
225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.525770 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.543589 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.598919 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.598970 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.598981 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.598998 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.599011 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.692183 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:17:00.055219903 +0000 UTC Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.701425 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.701459 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.701471 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.701484 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.701496 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.712087 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:33 crc kubenswrapper[4845]: E0202 10:32:33.712255 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.712492 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.712597 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:33 crc kubenswrapper[4845]: E0202 10:32:33.712673 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:33 crc kubenswrapper[4845]: E0202 10:32:33.712622 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.805144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.805218 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.805241 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.805265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.805284 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.909059 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.909124 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.909142 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.909165 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:33 crc kubenswrapper[4845]: I0202 10:32:33.909182 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:33Z","lastTransitionTime":"2026-02-02T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.013023 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.013080 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.013098 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.013123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.013140 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.043342 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/1.log" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.049097 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" event={"ID":"67f40dda-fb6a-490a-86a1-a14d6b183c8b","Type":"ContainerStarted","Data":"f0afaa9dd829ee2f0aa60a57b5c121c06decee396a0f4903ec001af7378c525f"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.123575 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.123628 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.123641 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.123657 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.123670 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.227015 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.227368 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.227381 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.227399 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.227410 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.330817 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.330872 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.330940 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.330968 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.330984 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.433587 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.433646 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.433658 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.433676 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.433688 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.535773 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.535813 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.535829 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.535849 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.535866 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.638700 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pmn9h"] Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639581 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: E0202 10:32:34.639689 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639824 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639851 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.639874 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.664139 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.681774 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.692779 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:55:51.802521599 +0000 UTC Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 
10:32:34.696565 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.712189 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.733748 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.742701 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.742755 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.742775 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.742799 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.742816 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.749426 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.757795 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.757866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5d9c\" (UniqueName: \"kubernetes.io/projected/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-kube-api-access-v5d9c\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.766163 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.789031 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.817053 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.832949 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.845939 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.846013 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.846036 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.846065 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.846094 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.850939 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.858945 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5d9c\" (UniqueName: \"kubernetes.io/projected/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-kube-api-access-v5d9c\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.859236 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: E0202 10:32:34.859445 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:34 crc kubenswrapper[4845]: E0202 
10:32:34.859566 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:35.359535398 +0000 UTC m=+36.450936878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.867246 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.879351 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5d9c\" (UniqueName: 
\"kubernetes.io/projected/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-kube-api-access-v5d9c\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.884819 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.900826 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.914432 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.931753 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.942661 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:34 crc 
kubenswrapper[4845]: I0202 10:32:34.948242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.948451 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.948555 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.948671 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:34 crc kubenswrapper[4845]: I0202 10:32:34.948835 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:34Z","lastTransitionTime":"2026-02-02T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.051362 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.051401 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.051434 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.051455 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.051468 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.055180 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" event={"ID":"67f40dda-fb6a-490a-86a1-a14d6b183c8b","Type":"ContainerStarted","Data":"9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.055230 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" event={"ID":"67f40dda-fb6a-490a-86a1-a14d6b183c8b","Type":"ContainerStarted","Data":"286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.069686 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.081526 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.092076 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924a
e188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.106056 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.117875 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.133794 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.153801 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.153855 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.153869 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.153913 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.153930 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.155337 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.182246 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.201683 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.229835 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.247186 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.256092 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.256124 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.256135 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.256152 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.256162 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.260451 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.271726 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.283426 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.296481 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.305941 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc 
kubenswrapper[4845]: I0202 10:32:35.316428 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.358673 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.358708 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.358718 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.358732 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.358741 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.364235 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.364355 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.364411 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:36.364398532 +0000 UTC m=+37.455799982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.460635 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.460715 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.460737 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.460767 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.460788 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.564399 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.564465 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.564483 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.564508 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.564525 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.566096 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.566263 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566286 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:32:51.566261724 +0000 UTC m=+52.657663294 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.566313 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.566362 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.566417 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566430 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:35 crc 
kubenswrapper[4845]: E0202 10:32:35.566464 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566481 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:51.56647136 +0000 UTC m=+52.657872990 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566541 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:51.566526622 +0000 UTC m=+52.657928252 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566638 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566667 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566700 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566715 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566772 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:51.566750668 +0000 UTC m=+52.658152118 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566672 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566800 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.566856 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:51.566837411 +0000 UTC m=+52.658238861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.668430 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.668486 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.668499 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.668518 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.668533 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.693750 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:14:57.844607176 +0000 UTC Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.712091 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.712161 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.712228 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.712244 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.712367 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:35 crc kubenswrapper[4845]: E0202 10:32:35.712585 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.771196 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.771242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.771251 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.771266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.771275 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.874000 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.874059 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.874071 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.874108 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.874122 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.976360 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.976393 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.976402 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.976414 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:35 crc kubenswrapper[4845]: I0202 10:32:35.976443 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:35Z","lastTransitionTime":"2026-02-02T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.079273 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.079347 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.079367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.079389 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.079403 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.182264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.182350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.182374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.182403 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.182424 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.284783 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.284873 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.284908 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.284932 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.284949 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.375249 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.375401 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.375492 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:38.375470463 +0000 UTC m=+39.466871913 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.387052 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.387120 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.387136 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.387159 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.387176 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.439300 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.439341 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.439352 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.439367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.439378 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.453158 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.457698 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.457731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.457741 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.457757 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.457768 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.471058 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.477659 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.477697 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.477709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.477724 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.477737 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.490521 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.494710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.494752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.494763 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.494777 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.494787 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.507284 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.511682 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.511726 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.511737 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.511754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.511769 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.525121 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.525283 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.526927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.526962 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.526973 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.526989 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.527001 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.628796 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.628840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.628851 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.628867 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.628877 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.694403 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:09:29.015186306 +0000 UTC Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.711756 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:36 crc kubenswrapper[4845]: E0202 10:32:36.711947 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.731400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.731447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.731458 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.731473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.731483 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.833833 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.833869 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.833878 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.833918 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.833936 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.935985 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.936032 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.936083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.936108 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:36 crc kubenswrapper[4845]: I0202 10:32:36.936130 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:36Z","lastTransitionTime":"2026-02-02T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.038444 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.038528 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.038556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.038580 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.038598 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.141867 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.141934 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.141944 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.141960 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.141970 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.245257 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.245302 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.245313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.245330 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.245341 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.348047 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.348079 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.348088 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.348106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.348119 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.450669 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.450711 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.450726 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.450745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.450763 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.553654 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.553689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.553699 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.553713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.553724 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.657189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.657247 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.657261 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.657280 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.657294 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.695183 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:09:00.700659985 +0000 UTC Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.712554 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.712653 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.712735 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:37 crc kubenswrapper[4845]: E0202 10:32:37.712723 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:37 crc kubenswrapper[4845]: E0202 10:32:37.712878 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:37 crc kubenswrapper[4845]: E0202 10:32:37.713014 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.759534 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.759586 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.759601 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.759622 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.759635 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.863242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.863274 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.863282 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.863294 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.863303 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.966366 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.966452 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.966473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.966498 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:37 crc kubenswrapper[4845]: I0202 10:32:37.966517 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:37Z","lastTransitionTime":"2026-02-02T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.069105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.069177 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.069195 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.069219 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.069236 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.172683 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.172766 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.172790 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.172818 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.172839 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.284445 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.284507 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.284519 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.284536 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.284550 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.386841 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.386923 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.386941 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.386962 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.386976 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.395914 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:38 crc kubenswrapper[4845]: E0202 10:32:38.396173 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:38 crc kubenswrapper[4845]: E0202 10:32:38.396287 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:42.396253563 +0000 UTC m=+43.487655053 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.490107 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.490144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.490156 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.490172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.490183 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.593293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.593349 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.593367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.593389 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.593406 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.695705 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:10:11.846007774 +0000 UTC Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.696198 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.696253 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.696305 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.696331 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.696347 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.711724 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:38 crc kubenswrapper[4845]: E0202 10:32:38.711843 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.799541 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.799597 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.799613 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.799633 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.799650 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.902160 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.902198 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.902209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.902225 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:38 crc kubenswrapper[4845]: I0202 10:32:38.902235 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:38Z","lastTransitionTime":"2026-02-02T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.004714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.004765 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.004784 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.004801 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.004814 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.107081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.107138 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.107154 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.107178 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.107195 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.210871 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.210938 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.210953 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.210973 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.210985 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.313988 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.314025 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.314037 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.314062 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.314075 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.417013 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.417055 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.417064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.417079 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.417091 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.520165 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.520211 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.520226 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.520245 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.520258 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.622270 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.622328 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.622346 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.622373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.622391 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.696068 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:25:45.786647213 +0000 UTC Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.712094 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.712145 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:39 crc kubenswrapper[4845]: E0202 10:32:39.712421 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:39 crc kubenswrapper[4845]: E0202 10:32:39.712558 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.713005 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:39 crc kubenswrapper[4845]: E0202 10:32:39.713117 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.725545 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.725646 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.725667 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.725731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.725764 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.727873 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.741987 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.756050 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.770111 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.786036 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.804466 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.817757 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc 
kubenswrapper[4845]: I0202 10:32:39.827374 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.828622 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.828669 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.828681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.828697 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.828708 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.842295 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.854749 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.865391 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.875336 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.885015 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.896385 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.910011 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.928332 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.931096 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.931144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.931161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.931182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.931196 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:39Z","lastTransitionTime":"2026-02-02T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:39 crc kubenswrapper[4845]: I0202 10:32:39.956743 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.033519 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.033572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.033589 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.033610 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.033626 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.136596 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.136659 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.136682 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.136712 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.136733 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.239681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.239818 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.239842 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.239872 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.239927 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.343199 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.343272 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.343297 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.343326 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.343347 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.446785 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.446859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.446928 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.447022 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.447052 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.550620 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.550693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.550717 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.550745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.550769 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.652837 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.652983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.653020 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.653048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.653069 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.696384 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:01:46.230312422 +0000 UTC Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.712468 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:40 crc kubenswrapper[4845]: E0202 10:32:40.712602 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.755556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.755583 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.755592 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.755607 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.755616 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.858723 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.858784 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.858800 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.858823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.858841 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.961503 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.961690 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.961724 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.961757 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:40 crc kubenswrapper[4845]: I0202 10:32:40.961777 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:40Z","lastTransitionTime":"2026-02-02T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.065038 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.065103 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.065115 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.065133 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.065146 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.168673 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.168727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.168737 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.168752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.168762 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.272681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.272743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.272760 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.272991 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.274382 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.378116 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.378182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.378200 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.378221 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.378238 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.480670 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.480712 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.480721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.480733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.480742 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.583776 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.583828 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.583845 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.583859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.583875 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.686983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.687036 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.687047 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.687065 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.687076 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.696690 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:17:45.246058537 +0000 UTC Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.712183 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.712213 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.712211 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:41 crc kubenswrapper[4845]: E0202 10:32:41.712340 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:41 crc kubenswrapper[4845]: E0202 10:32:41.712494 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:41 crc kubenswrapper[4845]: E0202 10:32:41.712660 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.793537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.794754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.794772 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.794787 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.794797 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.897619 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.897668 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.897680 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.897697 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:41 crc kubenswrapper[4845]: I0202 10:32:41.897709 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:41Z","lastTransitionTime":"2026-02-02T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.001789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.001923 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.001936 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.001955 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.001967 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.105373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.105419 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.105429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.105445 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.105458 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.207757 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.208092 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.208105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.208122 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.208133 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.311013 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.311076 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.311094 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.311120 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.311138 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.413657 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.413739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.413763 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.413798 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.413821 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.445881 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:42 crc kubenswrapper[4845]: E0202 10:32:42.446169 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:42 crc kubenswrapper[4845]: E0202 10:32:42.446310 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:32:50.446277519 +0000 UTC m=+51.537678999 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.517105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.517184 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.517209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.517239 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.517332 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.619794 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.619835 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.619843 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.619856 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.619905 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.697615 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:12:38.640367769 +0000 UTC Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.711858 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.712373 4845 scope.go:117] "RemoveContainer" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" Feb 02 10:32:42 crc kubenswrapper[4845]: E0202 10:32:42.712716 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.724048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.724079 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.724089 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.724121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.724131 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.826394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.826435 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.826453 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.826473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.826485 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.929220 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.929273 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.929284 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.929304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:42 crc kubenswrapper[4845]: I0202 10:32:42.929319 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:42Z","lastTransitionTime":"2026-02-02T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.032265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.032304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.032315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.032330 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.032341 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.087379 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.089011 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.089461 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.103338 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9
fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.114227 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020f
aaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.123800 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.134722 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924a
e188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.135097 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.135136 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.135148 4845 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.135165 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.135175 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.148410 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.161803 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.174516 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.193051 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81b434ba96232b8fa4880bdd4beca2db5c6a159643e207586ab3b6401ef09fdf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:31Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:32:31.598437 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:31.598495 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:31.598528 6245 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0202 10:32:31.598541 6245 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:31.598559 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:31.598581 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:32:31.598586 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:31.598588 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:32:31.598604 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:31.598627 6245 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:31.598660 6245 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:31.598674 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:32:31.598726 6245 factory.go:656] Stopping watch factory\\\\nI0202 10:32:31.598752 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.210646 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.224260 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.237418 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.237465 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.237477 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.237493 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.237504 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.238684 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.252584 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.269056 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.280293 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.290581 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.303279 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.312352 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:43 crc 
kubenswrapper[4845]: I0202 10:32:43.339714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.339752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.339761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.339775 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.339784 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.442666 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.442725 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.442743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.442768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.442785 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.545733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.545777 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.545791 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.545808 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.545817 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.647995 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.648043 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.648053 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.648069 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.648109 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.698832 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:06:06.257532798 +0000 UTC Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.712214 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.712248 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:43 crc kubenswrapper[4845]: E0202 10:32:43.712344 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.712359 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:43 crc kubenswrapper[4845]: E0202 10:32:43.712473 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:43 crc kubenswrapper[4845]: E0202 10:32:43.712615 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.749978 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.750014 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.750023 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.750036 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.750045 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.853281 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.853328 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.853339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.853357 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.853369 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.955840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.955873 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.955904 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.955920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:43 crc kubenswrapper[4845]: I0202 10:32:43.955930 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:43Z","lastTransitionTime":"2026-02-02T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.058266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.058311 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.058320 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.058335 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.058345 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.161096 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.161143 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.161155 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.161172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.161184 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.264129 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.264213 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.264234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.264262 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.264286 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.368075 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.368135 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.368150 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.368172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.368187 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.471507 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.471563 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.471580 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.471603 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.471621 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.575354 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.575416 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.575426 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.575468 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.575480 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.678656 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.678703 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.678716 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.678735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.678750 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.699429 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:00:26.023101947 +0000 UTC Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.711863 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:44 crc kubenswrapper[4845]: E0202 10:32:44.712035 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.781854 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.781929 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.781945 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.781959 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.781970 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.885139 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.885182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.885193 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.885211 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.885224 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.988568 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.988621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.988639 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.988666 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:44 crc kubenswrapper[4845]: I0202 10:32:44.988683 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:44Z","lastTransitionTime":"2026-02-02T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.092049 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.092135 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.092154 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.092178 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.092197 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.194972 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.195024 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.195041 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.195064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.195081 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.298398 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.298464 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.298483 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.298510 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.298530 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.402105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.402169 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.402187 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.402214 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.402235 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.505747 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.505808 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.505835 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.505863 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.505925 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.608344 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.608381 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.608392 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.608409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.608422 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.699938 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:15:29.626326592 +0000 UTC Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711604 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711647 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711722 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711756 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711771 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711791 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.711808 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: E0202 10:32:45.712023 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.712098 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:45 crc kubenswrapper[4845]: E0202 10:32:45.712234 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:45 crc kubenswrapper[4845]: E0202 10:32:45.712369 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.814462 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.814533 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.814552 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.814577 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.814599 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.917397 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.917463 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.917485 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.917513 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:45 crc kubenswrapper[4845]: I0202 10:32:45.917534 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:45Z","lastTransitionTime":"2026-02-02T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.020468 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.020503 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.020513 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.020525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.020535 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.123188 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.123259 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.123287 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.123317 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.123339 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.227380 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.227464 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.227487 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.227513 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.227533 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.330181 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.330221 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.330231 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.330244 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.330255 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.433011 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.433086 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.433100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.433121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.433135 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.536872 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.536958 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.536970 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.536988 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.537003 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.639533 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.639571 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.639582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.639596 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.639608 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.700533 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:38:14.221496548 +0000 UTC Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.711790 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.711948 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.741754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.741797 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.741806 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.741823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.741834 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.756006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.756042 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.756052 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.756106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.756116 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.781276 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.790739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.790809 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.790824 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.790871 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.790914 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.813460 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.816519 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.816540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.816548 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.816572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.816581 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.831608 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.835391 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.835423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.835432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.835446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.835454 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.846076 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.849450 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.849479 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.849490 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.849505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.849514 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.860526 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:46 crc kubenswrapper[4845]: E0202 10:32:46.860653 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.862407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.862444 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.862455 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.862472 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.862484 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.964931 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.964972 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.964982 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.964996 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:46 crc kubenswrapper[4845]: I0202 10:32:46.965008 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:46Z","lastTransitionTime":"2026-02-02T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.067181 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.067222 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.067234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.067246 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.067255 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.169161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.169208 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.169221 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.169236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.169246 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.271537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.271585 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.271602 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.271622 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.271637 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.374323 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.374373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.374384 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.374406 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.374436 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.476367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.476399 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.476407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.476419 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.476427 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.579149 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.579187 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.579199 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.579214 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.579223 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.681683 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.681725 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.681734 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.681748 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.681757 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.700961 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:00:21.197929818 +0000 UTC Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.712525 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.712578 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.712538 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:47 crc kubenswrapper[4845]: E0202 10:32:47.712948 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:47 crc kubenswrapper[4845]: E0202 10:32:47.713025 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:47 crc kubenswrapper[4845]: E0202 10:32:47.713091 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.713114 4845 scope.go:117] "RemoveContainer" containerID="85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.725063 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.738139 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.746327 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc 
kubenswrapper[4845]: I0202 10:32:47.756506 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.766120 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.774930 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.783468 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.783574 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.783600 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.783712 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.783815 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.785111 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.794751 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.803630 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.817190 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e
349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.828383 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.836492 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] 
[default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.855532 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.868452 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.879994 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 
10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.886635 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.886667 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.886678 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.886696 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.886707 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.894777 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5
225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.906615 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.917096 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:47Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.993189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.993236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.993246 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.993260 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:47 crc kubenswrapper[4845]: I0202 10:32:47.993270 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:47Z","lastTransitionTime":"2026-02-02T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.096672 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.096739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.096758 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.096781 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.096798 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.109043 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/1.log" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.113007 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.113565 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.136092 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.149267 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.173091 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.187377 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc 
kubenswrapper[4845]: I0202 10:32:48.199217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.199258 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.199271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.199293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.199307 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.207623 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.228191 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.246338 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.258718 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.269010 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.280870 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.299701 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.301411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.301452 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.301463 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.301480 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.301491 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.314698 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.328202 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 
10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.350614 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 
obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.369821 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.383240 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.397434 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.404009 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.404041 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.404052 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.404067 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.404077 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.506732 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.506769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.506779 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.506793 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.506803 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.609427 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.609476 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.609488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.609509 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.609522 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.701528 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:16:56.392135491 +0000 UTC Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.711509 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:48 crc kubenswrapper[4845]: E0202 10:32:48.711637 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.712505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.712540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.712552 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.712563 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.712572 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.815662 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.815778 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.815809 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.815838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.815862 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.919496 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.919628 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.919665 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.919693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:48 crc kubenswrapper[4845]: I0202 10:32:48.919717 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:48Z","lastTransitionTime":"2026-02-02T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.022138 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.022202 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.022219 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.022242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.022259 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.120404 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/2.log" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.121503 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/1.log" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.124609 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.124661 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.124677 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.124700 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.124717 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.125369 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" exitCode=1 Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.125442 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.125555 4845 scope.go:117] "RemoveContainer" containerID="85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.128151 4845 scope.go:117] "RemoveContainer" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" Feb 02 10:32:49 crc kubenswrapper[4845]: E0202 10:32:49.128501 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.153767 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.167600 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc 
kubenswrapper[4845]: I0202 10:32:49.182056 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.204060 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.215979 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.229411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.229554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.229582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.229699 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.229773 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.231064 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.246593 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.261967 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.278567 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e
349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.295144 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de92
61d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.320918 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f9
5e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.332937 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.333008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.333033 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.333062 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.333085 4845 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.347615 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.368673 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 
10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.399317 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.419208 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.436118 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.436194 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.436213 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.436236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.436253 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.439124 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.462350 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.539957 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.540052 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.540064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.540086 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.540102 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.643422 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.643503 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.643526 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.643571 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.643610 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.702190 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:07:23.312234367 +0000 UTC Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.711720 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.711741 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.711991 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:49 crc kubenswrapper[4845]: E0202 10:32:49.712049 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:49 crc kubenswrapper[4845]: E0202 10:32:49.711858 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:49 crc kubenswrapper[4845]: E0202 10:32:49.712220 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.738731 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.746725 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.746981 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.746995 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.747011 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.747021 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.752843 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.763506 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 
10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.779808 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85af18ff654e362360063055e913234601da73cd85fa7c0c61b518cd1c420d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:32Z\\\",\\\"message\\\":\\\" 6365 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0202 10:32:32.819174 6365 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-dns/node-resolver-rzb6b\\\\nI0202 10:32:32.819204 6365 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-thbz4\\\\nF0202 10:32:32.819209 6365 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:32Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:32:32.819214 6365 obj_retry.g\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.792367 4845 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.807707 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.819058 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.831584 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.839531 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc 
kubenswrapper[4845]: I0202 10:32:49.850015 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.850656 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.850694 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.850704 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.850716 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.850727 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.862086 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.872524 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.882783 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.893124 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.902745 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.913312 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.923108 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.952480 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.952530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.952543 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.952561 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:49 crc kubenswrapper[4845]: I0202 10:32:49.952574 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:49Z","lastTransitionTime":"2026-02-02T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.055088 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.055132 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.055143 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.055158 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.055169 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.130135 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/2.log" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.134447 4845 scope.go:117] "RemoveContainer" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" Feb 02 10:32:50 crc kubenswrapper[4845]: E0202 10:32:50.134817 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.150376 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.157378 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.157422 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.157438 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.157483 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.157523 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.165699 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.181607 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.197364 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.213346 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.227311 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc 
kubenswrapper[4845]: I0202 10:32:50.241253 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.257446 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.261582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.261643 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.261658 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc 
kubenswrapper[4845]: I0202 10:32:50.261683 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.261695 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.271088 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.283449 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924a
e188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.295851 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.305239 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.317375 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3
a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.348180 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.364431 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.364507 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.364531 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.364564 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.364588 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.379741 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.399064 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.415413 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:50Z is after 2025-08-24T17:21:41Z" Feb 02 
10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.467113 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.467159 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.467170 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.467186 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.467198 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.547400 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:50 crc kubenswrapper[4845]: E0202 10:32:50.547567 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:50 crc kubenswrapper[4845]: E0202 10:32:50.547624 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:06.547607892 +0000 UTC m=+67.639009352 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.569360 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.569423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.569440 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.569465 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.569482 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.673191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.673259 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.673284 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.673315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.673341 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.703004 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:21:27.916397137 +0000 UTC Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.712587 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:50 crc kubenswrapper[4845]: E0202 10:32:50.712842 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.776232 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.776680 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.776707 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.776736 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.776758 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.879680 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.879750 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.879769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.879795 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.879812 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.985993 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.986063 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.986078 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.986103 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:50 crc kubenswrapper[4845]: I0202 10:32:50.986121 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:50Z","lastTransitionTime":"2026-02-02T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.089144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.089205 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.089225 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.089249 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.089267 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.191671 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.191750 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.191773 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.191805 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.191828 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.294733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.294793 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.294809 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.294832 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.294846 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.397308 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.397371 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.397394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.397422 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.397443 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.500459 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.500542 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.500564 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.500597 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.500620 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.603993 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.604046 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.604058 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.604075 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.604089 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.659597 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.659774 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.659801 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:33:23.659774733 +0000 UTC m=+84.751176183 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.659852 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.659923 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.659945 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660015 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:23.659980499 +0000 UTC m=+84.751381979 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.660048 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660078 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660078 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660094 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660101 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660106 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660112 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660139 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:23.660132004 +0000 UTC m=+84.751533454 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660150 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:23.660145004 +0000 UTC m=+84.751546454 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660177 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.660223 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:23.660210126 +0000 UTC m=+84.751611606 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.703545 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:18:55.512137756 +0000 UTC Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.706834 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.706920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.706946 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.706972 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.706990 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.712109 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.712191 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.712205 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.712315 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.712398 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:51 crc kubenswrapper[4845]: E0202 10:32:51.712483 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.810314 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.810364 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.810380 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.810403 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.810419 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.913575 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.913611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.913621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.913636 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:51 crc kubenswrapper[4845]: I0202 10:32:51.913647 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:51Z","lastTransitionTime":"2026-02-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.016383 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.016449 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.016467 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.016493 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.016510 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.119576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.119625 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.119638 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.119655 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.119668 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.222168 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.222205 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.222217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.222232 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.222243 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.324687 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.324754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.324773 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.324798 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.324817 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.429561 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.429623 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.429645 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.429670 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.429687 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.532280 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.532324 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.532336 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.532352 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.532365 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.636103 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.636176 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.636200 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.636238 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.636262 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.704098 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:49:25.02519245 +0000 UTC Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.712554 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:52 crc kubenswrapper[4845]: E0202 10:32:52.712756 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.739756 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.739836 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.739859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.739927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.739956 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.842442 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.842511 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.842533 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.842557 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.842574 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.945616 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.945688 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.945710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.945741 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:52 crc kubenswrapper[4845]: I0202 10:32:52.945764 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:52Z","lastTransitionTime":"2026-02-02T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.009070 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.026004 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.033479 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.051170 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.051242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.051266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.051296 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.051318 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.054975 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.076224 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.096164 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.113854 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.130318 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924a
e188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.157643 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.157721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.157744 4845 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.157769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.157790 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.164423 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b38
09b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.188337 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.208801 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.238979 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.257068 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.260589 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.260625 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.260645 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.260659 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.260670 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.273976 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.292747 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.309829 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.321113 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.338307 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.348521 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:53Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:53 crc 
kubenswrapper[4845]: I0202 10:32:53.363544 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.363649 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.363671 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.363698 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.363717 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.467272 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.467353 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.467377 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.467405 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.467428 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.569752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.569811 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.569831 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.569854 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.569871 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.673461 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.673584 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.673610 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.673638 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.673659 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.705240 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:24:20.432117415 +0000 UTC Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.712694 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.712768 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:53 crc kubenswrapper[4845]: E0202 10:32:53.712845 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.712859 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:53 crc kubenswrapper[4845]: E0202 10:32:53.713035 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:53 crc kubenswrapper[4845]: E0202 10:32:53.713117 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.776083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.776163 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.776187 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.776217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.776239 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.879323 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.879395 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.879545 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.879581 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.879604 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.983141 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.983209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.983232 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.983259 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:53 crc kubenswrapper[4845]: I0202 10:32:53.983279 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:53Z","lastTransitionTime":"2026-02-02T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.086488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.086546 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.086563 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.086589 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.086605 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.188940 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.188986 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.189000 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.189019 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.189033 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.292129 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.292198 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.292235 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.292274 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.292299 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.395615 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.395672 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.395690 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.395713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.395732 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.498452 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.498510 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.498530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.498556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.498573 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.600774 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.600848 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.600866 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.600925 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.600943 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.704173 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.704262 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.704287 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.704315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.704335 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.706406 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:30:33.883768755 +0000 UTC Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.711775 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:54 crc kubenswrapper[4845]: E0202 10:32:54.712014 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.808379 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.808481 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.808509 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.808546 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.808594 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.911342 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.911404 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.911421 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.911446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:54 crc kubenswrapper[4845]: I0202 10:32:54.911463 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:54Z","lastTransitionTime":"2026-02-02T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.015044 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.015098 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.015115 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.015137 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.015153 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.117828 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.117910 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.117928 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.117950 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.117965 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.221300 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.221370 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.221388 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.221413 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.221435 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.324008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.324083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.324095 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.324112 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.324125 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.427409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.427488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.427508 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.427537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.427562 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.530820 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.530939 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.530962 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.531026 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.531045 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.633710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.633762 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.633776 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.633794 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.633808 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.706833 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:20:47.078988543 +0000 UTC Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.712172 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.712249 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:55 crc kubenswrapper[4845]: E0202 10:32:55.712315 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:55 crc kubenswrapper[4845]: E0202 10:32:55.712439 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.712490 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:55 crc kubenswrapper[4845]: E0202 10:32:55.712635 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.736698 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.736760 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.736777 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.736798 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.736815 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.840255 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.840326 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.840348 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.840376 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.840393 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.943229 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.943278 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.943293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.943313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:55 crc kubenswrapper[4845]: I0202 10:32:55.943407 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:55Z","lastTransitionTime":"2026-02-02T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.047946 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.048032 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.048063 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.048094 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.048114 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.152485 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.152526 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.152535 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.152551 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.152561 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.254973 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.255040 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.255062 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.255090 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.255112 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.357827 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.357926 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.357950 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.358015 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.358038 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.460764 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.460809 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.460823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.460844 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.460859 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.562701 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.562743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.562754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.562771 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.562782 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.665740 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.665801 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.665819 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.665843 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.665860 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.707211 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:08:51.824010733 +0000 UTC Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.712599 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:56 crc kubenswrapper[4845]: E0202 10:32:56.712777 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.767941 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.767987 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.767999 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.768015 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.768028 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.871469 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.871540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.871559 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.871585 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.871605 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.974369 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.974403 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.974411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.974424 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:56 crc kubenswrapper[4845]: I0202 10:32:56.974433 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:56Z","lastTransitionTime":"2026-02-02T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.076934 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.077007 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.077025 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.077051 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.077069 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.179684 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.179731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.179747 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.179768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.179783 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.254394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.254436 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.254448 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.254463 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.254475 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.275954 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.283677 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.283772 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.283787 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.283804 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.283823 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.302824 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.309935 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.309987 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.310005 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.310031 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.310048 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.329021 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.333744 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.333795 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.333810 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.333829 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.333842 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.348217 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.352308 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.352391 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.352417 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.352494 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.352519 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.372578 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.372768 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.374576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.374603 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.374615 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.374631 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.374644 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.477806 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.477863 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.477881 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.477942 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.477960 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.580434 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.580496 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.580513 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.580537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.580554 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.683535 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.683582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.683601 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.683626 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.683646 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.708326 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:40:18.161965172 +0000 UTC Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.711608 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.711681 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.711691 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.711767 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.711826 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:57 crc kubenswrapper[4845]: E0202 10:32:57.711878 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.785802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.785919 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.785945 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.785974 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.785996 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.888692 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.888755 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.888779 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.888810 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.888833 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.925916 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.948206 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.971451 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993106 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:57Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993651 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993766 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:57 crc kubenswrapper[4845]: I0202 10:32:57.993788 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:57Z","lastTransitionTime":"2026-02-02T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.013096 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.030082 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.050444 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.069088 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc 
kubenswrapper[4845]: I0202 10:32:58.087317 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.096985 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.097046 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.097073 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.097103 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.097125 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.103802 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.136269 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.153342 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedf
dab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.170460 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.188510 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.200362 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.200423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.200446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.200478 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.200501 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.209099 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.230743 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64
c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.254457 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.285234 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.303584 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.303617 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.303628 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.303645 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.303656 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.319236 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.407231 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.407273 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.407283 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.407304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.407318 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.509983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.510048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.510070 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.510100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.510121 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.621242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.621296 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.621315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.621338 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.621355 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.709265 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:07:19.277626215 +0000 UTC Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.712545 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:32:58 crc kubenswrapper[4845]: E0202 10:32:58.712666 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.724230 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.724374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.724397 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.724424 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.724443 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.831108 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.831236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.831263 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.831292 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.831312 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.934224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.934280 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.934296 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.934318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:58 crc kubenswrapper[4845]: I0202 10:32:58.934337 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:58Z","lastTransitionTime":"2026-02-02T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.037966 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.038030 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.038049 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.038073 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.038090 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.141411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.141473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.141501 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.141546 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.141593 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.245044 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.245121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.245144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.245173 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.245195 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.348243 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.348321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.348347 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.348376 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.348400 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.451289 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.451345 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.451363 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.451387 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.451405 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.554657 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.554763 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.554789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.554820 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.554844 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.657400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.657446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.657457 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.657474 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.657487 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.710057 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:09:01.018661852 +0000 UTC Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.711588 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:32:59 crc kubenswrapper[4845]: E0202 10:32:59.711709 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.711796 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.711834 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:32:59 crc kubenswrapper[4845]: E0202 10:32:59.711922 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:32:59 crc kubenswrapper[4845]: E0202 10:32:59.711981 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.728579 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.745452 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.760076 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.760121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.760133 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.760152 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.760164 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.763273 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.779335 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.793282 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.811238 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.826105 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.836676 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc 
kubenswrapper[4845]: I0202 10:32:59.850766 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.860875 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.862024 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.862106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.862121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.862184 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.862199 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.876473 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.892305 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.906163 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.922469 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.945772 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.965367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.965476 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.965502 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.965572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.965595 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:32:59Z","lastTransitionTime":"2026-02-02T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.968010 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
2600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:32:59 crc kubenswrapper[4845]: I0202 10:32:59.983727 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:32:59Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.011134 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.069255 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.069328 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.069354 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.069385 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.069408 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.172474 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.172544 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.172564 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.172588 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.172607 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.275805 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.275873 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.275924 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.275955 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.276104 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.379291 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.379350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.379369 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.379394 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.379410 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.490193 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.490264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.490298 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.490339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.490363 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.592986 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.593050 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.593074 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.593128 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.593154 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.696447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.696485 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.696497 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.696512 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.696524 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.712045 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:53:05.161931636 +0000 UTC Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.712239 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:00 crc kubenswrapper[4845]: E0202 10:33:00.712394 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.800040 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.800094 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.800111 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.800137 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.800154 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.903028 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.903121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.903147 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.903177 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:00 crc kubenswrapper[4845]: I0202 10:33:00.903198 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:00Z","lastTransitionTime":"2026-02-02T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.006140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.006188 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.006197 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.006210 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.006218 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.108220 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.108312 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.108320 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.108335 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.108344 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.211428 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.211493 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.211517 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.211548 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.211570 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.314299 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.314432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.314445 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.314464 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.314477 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.417429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.417503 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.417515 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.417530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.417541 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.522458 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.522507 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.522518 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.522537 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.522548 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.625112 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.625168 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.625185 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.625208 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.625227 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.711829 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.711915 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.711829 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:01 crc kubenswrapper[4845]: E0202 10:33:01.712071 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.712169 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:02:51.17081876 +0000 UTC Feb 02 10:33:01 crc kubenswrapper[4845]: E0202 10:33:01.712269 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:01 crc kubenswrapper[4845]: E0202 10:33:01.712356 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.727101 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.727146 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.727161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.727179 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.727193 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.829963 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.830021 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.830039 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.830067 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.830087 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.933014 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.933088 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.933106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.933129 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:01 crc kubenswrapper[4845]: I0202 10:33:01.933148 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:01Z","lastTransitionTime":"2026-02-02T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.035928 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.035964 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.035973 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.036012 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.036031 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.137675 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.137708 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.137718 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.137733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.137744 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.240248 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.240324 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.240344 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.240370 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.240447 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.342384 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.342439 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.342458 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.342481 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.342498 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.445548 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.445612 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.445621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.445638 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.445648 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.548381 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.548409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.548418 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.548430 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.548439 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.651151 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.651209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.651225 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.651247 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.651264 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.732245 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:08:15.500226477 +0000 UTC Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.732417 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:02 crc kubenswrapper[4845]: E0202 10:33:02.732635 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.733684 4845 scope.go:117] "RemoveContainer" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" Feb 02 10:33:02 crc kubenswrapper[4845]: E0202 10:33:02.733994 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.760951 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.761008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.761019 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.761037 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.761047 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.863323 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.863374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.863388 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.863407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.863426 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.966692 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.966736 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.966750 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.966768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:02 crc kubenswrapper[4845]: I0202 10:33:02.966779 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:02Z","lastTransitionTime":"2026-02-02T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.069054 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.069102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.069116 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.069132 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.069143 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.173041 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.173119 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.173144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.173172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.173192 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.275647 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.275690 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.275701 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.275720 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.275732 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.379012 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.379058 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.379070 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.379089 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.379099 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.481844 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.481913 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.481924 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.481942 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.481956 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.584570 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.584611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.584621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.584634 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.584642 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.687102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.687144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.687157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.687172 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.687184 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.711902 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.712006 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.712133 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:03 crc kubenswrapper[4845]: E0202 10:33:03.712122 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:03 crc kubenswrapper[4845]: E0202 10:33:03.712230 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:03 crc kubenswrapper[4845]: E0202 10:33:03.712337 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.732672 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:44:19.041573 +0000 UTC Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.789826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.789857 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.789868 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.789897 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.789907 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.892200 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.892265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.892288 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.892312 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.892329 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.994760 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.994806 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.994817 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.994833 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:03 crc kubenswrapper[4845]: I0202 10:33:03.994847 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:03Z","lastTransitionTime":"2026-02-02T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.097947 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.097994 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.098005 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.098024 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.098037 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.201414 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.201473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.201492 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.201517 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.201535 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.304505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.304541 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.304551 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.304566 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.304577 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.407263 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.407302 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.407310 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.407325 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.407335 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.509068 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.509102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.509112 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.509124 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.509132 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.612075 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.612127 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.612138 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.612157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.612170 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.711726 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:04 crc kubenswrapper[4845]: E0202 10:33:04.711861 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.714645 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.714681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.714693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.714709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.714722 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.732990 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:08:10.119291042 +0000 UTC Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.816811 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.816859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.816873 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.816908 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.816922 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.919846 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.919927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.919945 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.919969 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:04 crc kubenswrapper[4845]: I0202 10:33:04.919988 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:04Z","lastTransitionTime":"2026-02-02T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.022674 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.022722 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.022732 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.022747 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.022756 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.125836 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.125868 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.125880 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.125911 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.125922 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.228016 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.228074 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.228086 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.228102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.228113 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.331141 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.331193 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.331202 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.331215 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.331224 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.433079 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.433126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.433139 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.433157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.433169 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.535757 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.535788 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.535797 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.535810 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.535819 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.638423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.638466 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.638487 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.638505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.638769 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.711873 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.711917 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:05 crc kubenswrapper[4845]: E0202 10:33:05.712017 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.712172 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:05 crc kubenswrapper[4845]: E0202 10:33:05.712223 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:05 crc kubenswrapper[4845]: E0202 10:33:05.712386 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.733629 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:16:35.856213689 +0000 UTC Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.742237 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.742272 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.742284 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.742300 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.742312 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.845439 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.845468 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.845476 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.845488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.845496 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.948520 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.948572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.948582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.948598 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:05 crc kubenswrapper[4845]: I0202 10:33:05.948607 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:05Z","lastTransitionTime":"2026-02-02T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.051556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.051586 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.051594 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.051608 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.051617 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.154573 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.154636 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.154659 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.154689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.154712 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.257547 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.257595 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.257607 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.257624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.257636 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.360064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.360100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.360110 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.360125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.360138 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.462707 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.462778 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.462802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.462861 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.462923 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.565642 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.565733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.565757 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.565783 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.565800 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.624406 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:06 crc kubenswrapper[4845]: E0202 10:33:06.624567 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:33:06 crc kubenswrapper[4845]: E0202 10:33:06.624638 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:33:38.624618489 +0000 UTC m=+99.716019939 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.668920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.668971 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.668984 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.669004 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.669020 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.712549 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:06 crc kubenswrapper[4845]: E0202 10:33:06.712786 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.734104 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:42:21.230345551 +0000 UTC Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.771467 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.771517 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.771532 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.771549 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.771562 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.873925 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.873964 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.873974 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.873989 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.873999 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.976548 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.976594 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.976606 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.976621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:06 crc kubenswrapper[4845]: I0202 10:33:06.976632 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:06Z","lastTransitionTime":"2026-02-02T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.079038 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.079090 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.079101 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.079119 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.079130 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.181792 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.181859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.181881 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.181958 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.181982 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.284775 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.284839 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.284859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.284876 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.284904 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.387354 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.387404 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.387413 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.387429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.387440 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.489743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.489782 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.489792 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.489807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.489816 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.592521 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.592580 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.592598 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.592620 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.592638 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.686672 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.686713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.686721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.686735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.686746 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.705705 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712338 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712406 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.712462 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.712498 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712655 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712810 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.712842 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.713074 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.713213 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.726733 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.733498 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5
987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.734655 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:07:17.037149988 +0000 UTC Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.737576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.737748 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.737918 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.738058 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.738193 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.775183 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.779079 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.779376 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.779557 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.779736 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.779961 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.797076 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:07 crc kubenswrapper[4845]: E0202 10:33:07.797324 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.799375 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.799421 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.799431 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.799448 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.799459 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.901948 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.902008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.902025 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.902048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:07 crc kubenswrapper[4845]: I0202 10:33:07.902067 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:07Z","lastTransitionTime":"2026-02-02T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.005640 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.005689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.005703 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.005720 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.005734 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.107775 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.107810 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.107823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.107838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.107849 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.210369 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.210409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.210435 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.210450 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.210460 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.312946 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.312986 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.312998 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.313012 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.313021 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.415709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.415859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.415909 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.415942 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.415962 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.518358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.518407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.518415 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.518429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.518439 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.621325 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.621349 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.621358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.621370 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.621378 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.712359 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:08 crc kubenswrapper[4845]: E0202 10:33:08.712536 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.724336 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.724406 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.724429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.724457 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.724480 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.736870 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:18:50.192625871 +0000 UTC Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.827000 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.827055 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.827074 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.827100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.827117 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.929863 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.930071 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.930098 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.930126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:08 crc kubenswrapper[4845]: I0202 10:33:08.930145 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:08Z","lastTransitionTime":"2026-02-02T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.033174 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.033222 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.033234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.033250 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.033260 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.136196 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.136236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.136245 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.136261 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.136271 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.198855 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/0.log" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.198916 4845 generic.go:334] "Generic (PLEG): container finished" podID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" containerID="2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a" exitCode=1 Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.198941 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerDied","Data":"2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.199282 4845 scope.go:117] "RemoveContainer" containerID="2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.216938 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.229975 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.241328 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.241362 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.241372 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.241387 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.241397 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.243821 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.255045 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22b
ddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.268193 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.280227 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.295037 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.312333 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.346778 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.347020 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.347165 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.347313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.347385 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.351658 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.383768 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89
d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.397517 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.409927 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.422998 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.435779 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.448905 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.449519 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.449627 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.449709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.449797 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.449899 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.463595 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.473750 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc 
kubenswrapper[4845]: I0202 10:33:09.485721 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.498057 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.553020 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.553067 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.553080 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.553097 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.553111 4845 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.655306 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.655351 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.655360 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.655374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.655383 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.712098 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.712113 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:09 crc kubenswrapper[4845]: E0202 10:33:09.712210 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:09 crc kubenswrapper[4845]: E0202 10:33:09.712297 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.712113 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:09 crc kubenswrapper[4845]: E0202 10:33:09.712618 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.728164 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.737802 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:01:53.985907649 +0000 UTC Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.740824 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:355123
35ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.757831 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.757864 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.757872 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.757903 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.757912 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.759594 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.776483 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.789304 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.801229 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924a
e188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.823865 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.840687 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.855842 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.859840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.860010 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.860076 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.860147 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.860208 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.878203 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.891277 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.903999 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.915707 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.928716 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.939262 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.951686 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.965072 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 
10:33:09.965126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.965143 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.965165 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.965180 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:09Z","lastTransitionTime":"2026-02-02T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.965923 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.982566 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:09 crc kubenswrapper[4845]: I0202 10:33:09.993113 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc 
kubenswrapper[4845]: I0202 10:33:10.067176 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.067218 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.067230 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.067246 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.067258 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.169075 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.169116 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.169125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.169142 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.169177 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.202708 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/0.log" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.202756 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerStarted","Data":"a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.220061 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.236224 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.252308 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.265455 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.271406 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.271454 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.271483 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.271510 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.271529 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.277626 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.292212 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.304226 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc 
kubenswrapper[4845]: I0202 10:33:10.315838 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.328231 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.342187 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.353188 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.363780 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.374123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.374157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.374167 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.374180 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.374190 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.375111 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.384807 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.395029 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.406518 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.433481 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.462302 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.476108 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.476145 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.476155 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.476170 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.476180 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.479452 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.578998 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.579062 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.579133 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.579191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.579210 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.681349 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.681377 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.681385 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.681398 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.681409 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.712108 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:10 crc kubenswrapper[4845]: E0202 10:33:10.712355 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.738705 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:36:59.923255645 +0000 UTC Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.784639 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.784693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.784711 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.784735 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.784751 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.887202 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.887259 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.887275 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.887296 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.887309 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.989793 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.989832 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.989840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.989853 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:10 crc kubenswrapper[4845]: I0202 10:33:10.989862 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:10Z","lastTransitionTime":"2026-02-02T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.092221 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.092299 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.092371 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.092447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.092474 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.195518 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.195560 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.195572 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.195588 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.195599 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.298252 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.298304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.298322 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.298344 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.298360 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.401182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.401228 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.401244 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.401267 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.401284 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.504423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.504465 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.504474 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.504488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.504498 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.607048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.607122 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.607140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.607163 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.607180 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.710841 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.710948 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.710974 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.711006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.711027 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.711827 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.711830 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.711831 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:11 crc kubenswrapper[4845]: E0202 10:33:11.712157 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:11 crc kubenswrapper[4845]: E0202 10:33:11.712004 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:11 crc kubenswrapper[4845]: E0202 10:33:11.712233 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.739064 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:34:28.252021605 +0000 UTC Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.813566 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.813628 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.813645 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.813673 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.813689 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.916101 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.916140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.916191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.916224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:11 crc kubenswrapper[4845]: I0202 10:33:11.916237 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:11Z","lastTransitionTime":"2026-02-02T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.019015 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.019083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.019098 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.019114 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.019126 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.121499 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.121578 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.121595 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.121650 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.121667 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.224624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.224686 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.224705 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.224728 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.224746 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.326841 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.326900 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.326913 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.326931 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.326944 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.430271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.430315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.430330 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.430350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.430364 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.532866 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.532918 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.532934 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.532952 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.532963 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.635721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.635771 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.635788 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.635811 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.635827 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.711529 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:12 crc kubenswrapper[4845]: E0202 10:33:12.711658 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.738840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.738900 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.738909 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.738926 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.738934 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.739355 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:32:02.624850816 +0000 UTC Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.842592 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.842662 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.842680 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.842705 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.842722 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.945662 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.945725 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.945737 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.945752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:12 crc kubenswrapper[4845]: I0202 10:33:12.945763 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:12Z","lastTransitionTime":"2026-02-02T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.047772 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.047816 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.047826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.047844 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.047855 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.150407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.150470 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.150487 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.150512 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.150531 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.252611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.252675 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.252689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.252708 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.252720 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.355236 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.355285 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.355302 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.355321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.355335 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.458356 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.458409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.458427 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.458449 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.458466 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.560769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.560816 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.560828 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.560850 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.560861 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.664087 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.664151 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.664166 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.664182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.664195 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.711926 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.711962 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.712033 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:13 crc kubenswrapper[4845]: E0202 10:33:13.712110 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:13 crc kubenswrapper[4845]: E0202 10:33:13.712189 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:13 crc kubenswrapper[4845]: E0202 10:33:13.712343 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.740217 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:02:56.150140507 +0000 UTC Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.767002 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.767060 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.767081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.767109 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.767126 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.869988 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.870085 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.870114 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.870140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.870160 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.980983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.981030 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.981038 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.981051 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:13 crc kubenswrapper[4845]: I0202 10:33:13.981061 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:13Z","lastTransitionTime":"2026-02-02T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.084154 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.084214 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.084231 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.084254 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.084271 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.189587 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.189657 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.189680 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.189709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.189733 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.292726 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.292779 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.292796 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.292817 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.292834 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.396265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.396318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.396335 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.396358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.396376 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.498667 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.498781 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.498806 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.498836 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.498858 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.601193 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.601245 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.601260 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.601279 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.601291 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.703958 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.704235 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.704341 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.704478 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.704576 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.711691 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:14 crc kubenswrapper[4845]: E0202 10:33:14.711802 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.712803 4845 scope.go:117] "RemoveContainer" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.741116 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:52:13.181326718 +0000 UTC Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.807535 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.807879 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.808085 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.808262 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.808421 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.912021 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.912081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.912098 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.912123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:14 crc kubenswrapper[4845]: I0202 10:33:14.912140 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:14Z","lastTransitionTime":"2026-02-02T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.015294 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.015637 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.015951 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.016089 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.016196 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.119852 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.119948 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.119966 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.119990 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.120006 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.219502 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/2.log" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.221576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.221848 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.222100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.222308 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.222455 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.224540 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.225288 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.247607 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-0
2T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.279694 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89
d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.300127 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.325253 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.325294 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.325305 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.325319 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.325328 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.329791 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.345505 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.375321 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.395535 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.417146 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.427231 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.427260 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.427271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.427299 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.427307 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.428894 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.439998 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.452358 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.466598 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.476426 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc 
kubenswrapper[4845]: I0202 10:33:15.490691 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.500546 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.514194 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529312 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529349 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529357 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529371 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529380 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.529766 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.540782 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.552393 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.632317 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.632357 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.632371 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.632388 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.632398 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.712017 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:15 crc kubenswrapper[4845]: E0202 10:33:15.712232 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.712615 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:15 crc kubenswrapper[4845]: E0202 10:33:15.712775 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.713253 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:15 crc kubenswrapper[4845]: E0202 10:33:15.713403 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.736001 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.736112 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.736136 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.736168 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.736189 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.742478 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:12:33.186604144 +0000 UTC Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.839214 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.839288 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.839307 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.839807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.839866 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.943638 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.943728 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.943746 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.943772 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:15 crc kubenswrapper[4845]: I0202 10:33:15.943788 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:15Z","lastTransitionTime":"2026-02-02T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.047420 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.047826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.048109 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.048299 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.048460 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.151373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.151429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.151446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.151467 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.151483 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.230735 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/3.log" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.231783 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/2.log" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.236271 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" exitCode=1 Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.236335 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.236400 4845 scope.go:117] "RemoveContainer" containerID="939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.237534 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:33:16 crc kubenswrapper[4845]: E0202 10:33:16.237915 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.254301 4845 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.254367 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.254385 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.254409 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.254426 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.273628 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.295943 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89
d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.316581 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.351117 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://939954f04b6266a3da83d0b6bbe32dd51b52c2121163e582010da34e5023fe77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:32:48Z\\\",\\\"message\\\":\\\"moval\\\\nI0202 10:32:48.630958 6609 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 10:32:48.630917 6609 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:32:48.631048 6609 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:32:48.631090 6609 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0202 10:32:48.631121 6609 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:32:48.631188 6609 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:32:48.631221 6609 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:32:48.631268 6609 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:32:48.631400 6609 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631470 6609 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:32:48.631530 6609 factory.go:656] Stopping watch factory\\\\nI0202 10:32:48.631542 6609 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 10:32:48.631633 6609 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:32:48.631565 6609 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 10:32:48.631714 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:32:48.631814 6609 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:15Z\\\",\\\"message\\\":\\\":303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771107 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771126 7022 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd in node crc\\\\nI0202 10:33:15.771141 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd after 0 failed attempt(s)\\\\nI0202 10:33:15.771152 7022 default_network_controller.go:776] 
Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771151 7022 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771173 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771184 7022 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-xdtrh in node crc\\\\nF0202 10:33:15.771184 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:33:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.357967 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 
10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.358025 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.358042 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.358082 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.358100 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.371157 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.392793 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.412515 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.435581 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.450918 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc 
kubenswrapper[4845]: I0202 10:33:16.461228 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.461276 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.461293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.461317 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.461334 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.465736 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9d
f9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.481032 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.498188 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.517757 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.533881 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.554332 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.565139 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.565198 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.565217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.565242 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.565260 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.574746 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.592621 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.615447 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.633006 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.669067 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.669150 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.669171 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc 
kubenswrapper[4845]: I0202 10:33:16.669196 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.669214 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.711638 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:16 crc kubenswrapper[4845]: E0202 10:33:16.711827 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.743447 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:28:04.908610689 +0000 UTC Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.772400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.772447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.772460 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.772479 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.772496 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.875116 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.875211 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.875240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.875298 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.875323 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.980727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.980802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.980825 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.980859 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:16 crc kubenswrapper[4845]: I0202 10:33:16.980922 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:16Z","lastTransitionTime":"2026-02-02T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.084152 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.084250 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.084280 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.084318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.084346 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.186957 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.187002 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.187011 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.187026 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.187039 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.242660 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/3.log" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.247404 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.247718 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.261777 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.276020 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.290542 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.290614 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.290635 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.290662 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.290681 4845 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.294028 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.307652 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.322602 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.339261 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc 
kubenswrapper[4845]: I0202 10:33:17.350048 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.360693 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.373257 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.382821 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.393958 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.394345 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.394389 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.394423 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.394443 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.394455 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.406632 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.427416 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.440825 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.455727 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.483033 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:15Z\\\",\\\"message\\\":\\\":303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771107 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771126 7022 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd in node crc\\\\nI0202 
10:33:15.771141 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd after 0 failed attempt(s)\\\\nI0202 10:33:15.771152 7022 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771151 7022 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771173 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771184 7022 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-xdtrh in node crc\\\\nF0202 10:33:15.771184 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:33:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.497668 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.498040 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.498095 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.498106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.498120 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.498132 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.545197 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.558279 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.601234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.601264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.601273 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.601286 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.601298 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.703733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.703769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.703780 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.703795 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.703807 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.711696 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.711835 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.711938 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.712004 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.711746 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.712210 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.743997 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:03:15.562262504 +0000 UTC Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.806620 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.806673 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.806687 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.806703 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.806716 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.908716 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.908761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.908770 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.908787 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.908799 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.913789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.913847 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.913865 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.913925 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.913942 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.933794 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.938155 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.938189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.938197 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.938211 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.938221 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.955018 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.960398 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.960458 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.960470 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.960485 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.960496 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.978208 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.981576 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.981609 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.981620 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.981635 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:17 crc kubenswrapper[4845]: I0202 10:33:17.981645 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:17Z","lastTransitionTime":"2026-02-02T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:17 crc kubenswrapper[4845]: E0202 10:33:17.992925 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.001635 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.001691 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.001710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.001734 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.001754 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: E0202 10:33:18.014429 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:18Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:18 crc kubenswrapper[4845]: E0202 10:33:18.014714 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.016708 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.016752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.016769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.016791 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.016806 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.119909 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.119958 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.119970 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.119987 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.119998 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.223026 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.223056 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.223065 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.223076 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.223085 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.326617 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.326681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.326694 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.326713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.326726 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.429307 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.429400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.429453 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.429480 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.429498 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.531126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.531178 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.531189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.531207 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.531218 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.634524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.634587 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.634603 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.634626 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.634644 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.712237 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:18 crc kubenswrapper[4845]: E0202 10:33:18.712412 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.737505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.737556 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.737578 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.737604 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.737625 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.745020 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:17:37.50405896 +0000 UTC Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.839937 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.839973 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.839983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.840002 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.840012 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.943525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.943580 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.943601 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.943630 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:18 crc kubenswrapper[4845]: I0202 10:33:18.943650 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:18Z","lastTransitionTime":"2026-02-02T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.046848 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.046954 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.046978 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.047007 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.047029 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.150417 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.150479 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.150495 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.150520 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.150536 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.253217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.253272 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.253287 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.253309 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.253325 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.356069 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.356135 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.356144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.356156 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.356165 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.459255 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.459322 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.459339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.459364 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.459379 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.562348 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.562714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.562882 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.563106 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.563258 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.666724 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.666756 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.666772 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.666789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.666802 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.712086 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.712088 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:19 crc kubenswrapper[4845]: E0202 10:33:19.712341 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:19 crc kubenswrapper[4845]: E0202 10:33:19.712224 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.712108 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:19 crc kubenswrapper[4845]: E0202 10:33:19.712429 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.728196 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.741277 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.745182 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:01:28.621247827 +0000 UTC Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.763400 4845 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36
fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:3
2:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.768487 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.768517 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.768528 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.768542 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.768554 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.775532 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc 
kubenswrapper[4845]: I0202 10:33:19.787075 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.802913 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.817092 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.831369 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895
c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.847412 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.863484 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.870183 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.870230 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.870240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.870254 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.870263 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.878447 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.889950 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.906360 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.936726 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:15Z\\\",\\\"message\\\":\\\":303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771107 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771126 7022 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd in node crc\\\\nI0202 
10:33:15.771141 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd after 0 failed attempt(s)\\\\nI0202 10:33:15.771152 7022 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771151 7022 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771173 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771184 7022 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-xdtrh in node crc\\\\nF0202 10:33:15.771184 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:33:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.959576 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.972566 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.972612 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.972624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.972640 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.972651 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:19Z","lastTransitionTime":"2026-02-02T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.977027 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:19 crc kubenswrapper[4845]: I0202 10:33:19.992434 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.013038 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.036946 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.075531 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.075581 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.075608 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.075625 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.075636 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.177368 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.177446 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.177472 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.177502 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.177523 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.280007 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.280078 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.280091 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.280111 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.280124 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.383358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.383433 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.383457 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.383491 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.383514 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.486784 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.486834 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.486850 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.486871 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.486913 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.590033 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.590078 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.590094 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.590117 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.590134 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.693008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.693078 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.693097 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.693123 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.693141 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.712359 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:20 crc kubenswrapper[4845]: E0202 10:33:20.712597 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.746120 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:03:22.232770698 +0000 UTC Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.795993 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.796061 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.796083 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.796115 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.796137 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.899268 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.899333 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.899351 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.899374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:20 crc kubenswrapper[4845]: I0202 10:33:20.899391 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:20Z","lastTransitionTime":"2026-02-02T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.002823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.002874 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.002927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.002951 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.002968 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.106191 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.106244 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.106265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.106290 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.106312 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.209144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.209217 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.209241 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.209269 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.209291 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.312324 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.312381 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.312399 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.312421 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.312438 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.415453 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.415524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.415554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.415587 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.415609 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.519127 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.519173 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.519190 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.519207 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.519217 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.622141 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.622222 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.622240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.622266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.622282 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.712139 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.712225 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:21 crc kubenswrapper[4845]: E0202 10:33:21.712313 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.712380 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:21 crc kubenswrapper[4845]: E0202 10:33:21.712530 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:21 crc kubenswrapper[4845]: E0202 10:33:21.712724 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.725161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.725194 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.725206 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.725219 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.725228 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.746560 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:39:48.848883117 +0000 UTC Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.829439 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.829489 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.829503 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.829525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.829537 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.932359 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.932400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.932416 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.932434 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:21 crc kubenswrapper[4845]: I0202 10:33:21.932447 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:21Z","lastTransitionTime":"2026-02-02T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.036257 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.036326 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.036348 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.036375 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.036396 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.141774 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.141823 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.141837 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.141854 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.141869 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.244709 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.244745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.244754 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.244768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.244778 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.348144 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.348214 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.348240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.348271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.348292 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.451689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.452313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.452333 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.452347 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.452356 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.555956 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.556047 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.556072 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.556099 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.556120 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.659342 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.659428 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.659462 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.659506 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.659548 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.712352 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:22 crc kubenswrapper[4845]: E0202 10:33:22.712541 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.747721 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:33:43.553612973 +0000 UTC Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.762140 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.762176 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.762184 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.762197 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.762206 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.865157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.865233 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.865256 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.865285 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.865306 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.968559 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.968652 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.968681 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.968706 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:22 crc kubenswrapper[4845]: I0202 10:33:22.968724 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:22Z","lastTransitionTime":"2026-02-02T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.072353 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.072429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.072455 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.072484 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.072505 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.175713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.175777 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.175789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.175808 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.175822 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.278257 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.278295 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.278306 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.278321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.278331 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.381248 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.381311 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.381327 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.381350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.381367 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.484806 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.484860 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.484877 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.484928 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.484945 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.588339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.588400 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.588419 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.588442 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.588459 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.691492 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.691558 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.691577 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.691602 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.691620 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.700202 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.700335 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700408 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.70037815 +0000 UTC m=+148.791779630 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700480 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700504 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700523 4845 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700579 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.700562066 +0000 UTC m=+148.791963556 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700604 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700628 4845 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700644 4845 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.700496 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700695 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:27.70068023 +0000 UTC m=+148.792081720 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.700782 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.700832 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.700945 4845 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.701003 4845 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.701013 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.700995299 +0000 UTC m=+148.792396779 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.701116 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.701090112 +0000 UTC m=+148.792491592 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.712475 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.712515 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.712523 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.712630 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.712759 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:23 crc kubenswrapper[4845]: E0202 10:33:23.713018 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.748506 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:38:03.137750613 +0000 UTC Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.794240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.794298 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.794315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.794339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.794357 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.896126 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.896171 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.896180 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.896192 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.896201 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.998947 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.999011 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.999028 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.999053 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:23 crc kubenswrapper[4845]: I0202 10:33:23.999070 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:23Z","lastTransitionTime":"2026-02-02T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.102190 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.102250 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.102270 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.102293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.102310 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.205232 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.205310 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.205350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.205382 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.205406 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.308251 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.308304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.308314 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.308332 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.308344 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.410952 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.411032 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.411050 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.411074 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.411092 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.514440 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.514492 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.514504 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.514527 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.514541 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.618356 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.618403 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.618413 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.618429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.618442 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.712438 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:24 crc kubenswrapper[4845]: E0202 10:33:24.712605 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.721332 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.721393 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.721408 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.721429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.721443 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.749103 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:25:20.058714981 +0000 UTC Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.823983 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.824044 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.824061 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.824085 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.824104 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.926851 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.926940 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.926959 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.926985 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:24 crc kubenswrapper[4845]: I0202 10:33:24.927004 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:24Z","lastTransitionTime":"2026-02-02T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.029658 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.029724 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.029741 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.029764 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.029782 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.132219 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.132256 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.132266 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.132282 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.132295 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.234872 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.235002 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.235021 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.235056 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.235091 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.338297 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.338338 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.338346 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.338359 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.338367 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.441505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.441569 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.441587 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.441611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.441628 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.544271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.544306 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.544316 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.544329 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.544337 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.647752 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.647808 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.647827 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.647881 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.647935 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.712397 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:33:25 crc kubenswrapper[4845]: E0202 10:33:25.712570 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.712837 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:33:25 crc kubenswrapper[4845]: E0202 10:33:25.712976 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.713344 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:33:25 crc kubenswrapper[4845]: E0202 10:33:25.713547 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.749281 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:20:02.168759933 +0000 UTC
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.750968 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.751026 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.751037 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.751055 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.751067 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.853779 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.853807 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.853817 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.853833 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.853844 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.956268 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.956332 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.956352 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.956380 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:25 crc kubenswrapper[4845]: I0202 10:33:25.956401 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:25Z","lastTransitionTime":"2026-02-02T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.058625 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.058676 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.058693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.058713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.058730 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.160789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.160842 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.160858 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.160880 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.160950 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.263666 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.263727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.263743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.263768 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.263785 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.366383 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.366711 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.366805 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.366916 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.367002 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.470258 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.470617 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.470714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.470799 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.470913 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.574189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.574226 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.574234 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.574248 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.574257 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.678412 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.678499 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.678534 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.678582 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.678607 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.711837 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h"
Feb 02 10:33:26 crc kubenswrapper[4845]: E0202 10:33:26.712121 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.749720 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:07:19.283324304 +0000 UTC
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.781452 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.781650 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.781673 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.781730 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.781752 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.884731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.884804 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.884821 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.884850 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.884870 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.989369 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.989448 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.989481 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.989513 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:26 crc kubenswrapper[4845]: I0202 10:33:26.989537 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:26Z","lastTransitionTime":"2026-02-02T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.092789 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.092846 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.092868 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.092924 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.092948 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.195524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.195604 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.195633 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.195667 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.195687 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.298208 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.298265 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.298282 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.298304 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.298321 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.405492 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.405561 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.405590 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.405621 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.405646 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.508421 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.508488 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.508528 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.508559 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.508585 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.612088 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.612129 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.612138 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.612151 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.612162 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.714097 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.714297 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.714387 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:33:27 crc kubenswrapper[4845]: E0202 10:33:27.714622 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:33:27 crc kubenswrapper[4845]: E0202 10:33:27.714827 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:33:27 crc kubenswrapper[4845]: E0202 10:33:27.714972 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.716632 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.716684 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.716713 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.716738 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.716755 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.750532 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:26:55.169598552 +0000 UTC
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.819935 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.820036 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.820054 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.820076 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.820092 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.971934 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.972064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.972085 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.972113 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:27 crc kubenswrapper[4845]: I0202 10:33:27.972133 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:27Z","lastTransitionTime":"2026-02-02T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.074551 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.074590 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.074601 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.074615 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.074628 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.178157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.178255 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.178274 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.178691 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.178963 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.280668 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.280721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.280743 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.280769 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.280786 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.302645 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.307994 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.308053 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.308069 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.308093 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.308110 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.326981 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.331989 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.332048 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.332068 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.332092 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.332109 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.352774 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.358179 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.358253 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.358271 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.358297 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.358314 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.405278 4845 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8f18ce78-9cc3-4dbd-9d49-5987790a156d\\\",\\\"systemUUID\\\":\\\"a0f7ad40-dfc7-4c48-b08f-9dc9799ca728\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:28Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.405610 4845 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.407699 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.407761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.407785 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.407813 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.407838 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.510990 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.511045 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.511062 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.511084 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.511101 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.614319 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.614369 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.614390 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.614417 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.614435 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.711635 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:28 crc kubenswrapper[4845]: E0202 10:33:28.711823 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.717683 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.717744 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.717763 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.717786 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.717804 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.751133 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:21:55.246155615 +0000 UTC Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.820833 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.820878 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.820909 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.820927 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.820941 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.923664 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.923745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.923766 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.923790 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:28 crc kubenswrapper[4845]: I0202 10:33:28.923823 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:28Z","lastTransitionTime":"2026-02-02T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.026662 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.026715 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.026733 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.026755 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.026772 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.130262 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.130321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.130339 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.130365 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.130383 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.233415 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.233486 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.233497 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.233516 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.233527 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.337001 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.337064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.337081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.337105 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.337123 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.439547 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.439605 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.439618 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.439638 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.439652 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.542761 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.542826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.542843 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.542867 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.542919 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.645610 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.645660 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.645672 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.645689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.645706 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.712403 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.712414 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:29 crc kubenswrapper[4845]: E0202 10:33:29.712637 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.712681 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:29 crc kubenswrapper[4845]: E0202 10:33:29.712811 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:29 crc kubenswrapper[4845]: E0202 10:33:29.712963 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.733550 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.750335 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.750398 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.750414 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.750437 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.750455 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.751475 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:35:15.860076465 +0000 UTC Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.754774 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.779724 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-thbz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01f1334f-a21c-4487-a1d6-dbecf7017c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1c279629fb253d48dc16d3135455acf6898e13d9c6c1abdbbba6745bf444271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e833c4ffa4adc1ef1e3436117fc1c7bfc71ccdaa6a33138f70316fa4aa148c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94f90971c998191dab0c4c9bd5c32505ad7d59936f1278bbeadca6d396515e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f687cea8c04a18db685e70c1f08a6a39c133fd1457894abbc30378dc55451e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e84
4982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e844982b527eb7d6e332e02b3103b17356b972ae995785c2b77c4da9d4cfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d80fba12e96c4d6bf74ce0f6eaa569bcc36fbe1f37ab194a1349a00647373c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a0c97b12b3f63a2cf8e758064a8f223db848e8e59b7ae091a1c7f6c6cfaf233\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9h9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-thbz4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.797571 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v5d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pmn9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc 
kubenswrapper[4845]: I0202 10:33:29.818417 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc2b1ae-676c-4b63-98d0-074ca1549855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de557b3db542f15304bf4040cc3d6cb977653435b5554a5997017bddd99f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a90efd39510f9e697dda93f9d9df9d66fc4939524d5f63dac2de69eb631be\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb077b7b9ee7a19d612c668b38abae9c19bc0b77fb1d0e6ead7456b9e4f1778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc810e75835737db586cd41987af98c1b2aaf409fc667b10b0c793f925e1ce20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.830638 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0555b7db-8395-4133-bcbb-cc7b913aa8d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77147984edf194643baed82a9c8ff33e195c8f0fd949be89036668931346dc84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970e304480db2ba449cc9b741e5e43e49e2b4644b1d7361f680de0487fb489cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.845252 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzwst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:33:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:08Z\\\",\\\"message\\\":\\\"2026-02-02T10:32:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574\\\\n2026-02-02T10:32:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b74a23ed-1be3-4ccf-8e01-88593520f574 to /host/opt/cni/bin/\\\\n2026-02-02T10:32:23Z [verbose] multus-daemon started\\\\n2026-02-02T10:32:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:33:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ptl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzwst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.852584 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.852624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.852634 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.852650 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.852662 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.856540 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf2f253-531f-4835-84c1-928680352f7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba67ef020faaba1b0de9261d9c81a2fb5f7150fd656146120be915d505b9690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc8c6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wnn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.869269 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xdtrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c7aea7-01dc-4f6d-ab41-94447a76fd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://520376a20ac322cf8b06027f9eaf3ac45b420bc59dc146e0a0f7d607e4e8b4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f24s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xdtrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.885503 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67f40dda-fb6a-490a-86a1-a14d6b183c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://286c1a22bddd9b49ba922e96d6189249be4b138ad1cbddaa0900380c3ac74fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b76ba08a019ae3c5baf8eb38326be46e924ae188878727c70a3cdd922978df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvdnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2rmk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.902478 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70456ccb7609e2b329f3354b6bf22567cf006def4b2d6d3c977a5557610255b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.914855 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzb6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b94e8620-d850-4036-b311-42b2a6369c73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56fd0a9441c1ef3ad3648d904b687c0047546fced1f41cdbd684e9a459274ca1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5szs4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzb6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.928912 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51975473c1a580da686a24af479ba6bdf021aa18374e7a5a5a25697f0e7c1c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4c91dd525a59761c418dc5d0da4726c8fb020132487234982f56b7b1baa997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.949864 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:33:15Z\\\",\\\"message\\\":\\\":303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771107 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771126 7022 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd in node crc\\\\nI0202 
10:33:15.771141 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-sh5vd after 0 failed attempt(s)\\\\nI0202 10:33:15.771152 7022 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-sh5vd\\\\nI0202 10:33:15.771151 7022 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771173 7022 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-xdtrh\\\\nI0202 10:33:15.771184 7022 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-xdtrh in node crc\\\\nF0202 10:33:15.771184 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:33:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9066826bc923668c
4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:32:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sh5vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.955942 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.956009 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.956029 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.956057 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.956076 4845 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:29Z","lastTransitionTime":"2026-02-02T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.974842 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734a376-782c-48c5-a900-6848db158367\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbaa3d016ffb0dcfac62822079d7d6107d9c83d148fced65d10a499ecc59a209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e16c171df9a3b92833d4b9ce6a266e61e1e395be8718a8f0ad17d91e2f81bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e233a2094abb67e4681a879c9833ebfab622f3d474c626eedb84c64bfd472267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7c63b5e4c1ff659c55062d96136f9e1da6aa5b387b9a5c827a214a08d20b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd643db8d775b30f6af31b66cbf14a3c69d5f905b259f8cc7278cdfdbb47fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b28b6ef2595e94d53201f95e0ee15ca846f46f180304e060e81c80b286594676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c82afeaee19739204c15badd7f3c696894d6a55ad3864e5fe8f0ffe8513abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1292bccaf8d5a2245d414fceb61f4d986778c4609b3809b6242afe6eae000db3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:29 crc kubenswrapper[4845]: I0202 10:33:29.994820 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:32:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:32:19.778605 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:32:19.778698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:32:19.779308 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1786026333/tls.crt::/tmp/serving-cert-1786026333/tls.key\\\\\\\"\\\\nI0202 10:32:20.300305 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:32:20.306387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:32:20.306405 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:32:20.306428 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:32:20.306438 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:32:20.316852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:32:20.316876 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:32:20.316899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:32:20.316904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:32:20.316907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:32:20.316910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:32:20.317037 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:32:20.318666 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07753d42b1f45141fa85e04e5ef07cd89
d67c405b95a52b8f64aa0fa3785a850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.013704 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.032971 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ebd4f2-07f1-4feb-891d-90a28f6bd42b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e142c14a88e83ff651b3bc885d4a686a80009bcb2f8e462676f985525a22a13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555c98f1dd5a685f181dc7a501212953659b1f17682e20c8240891b78572f8ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c63b9e73c0108a15bfbdab8ad846fecd1881b4cbe8efe70d4b0590c77d028e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:31:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.047511 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:32:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dcfcbbb0e3f0e6e8d42cc155396b11719e721ede6a85209e69ac2eede1ac353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:33:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.059224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.059292 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.059313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.059344 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.059373 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.162652 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.162716 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.162727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.162745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.162759 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.265444 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.265505 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.265524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.265548 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.265570 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.369125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.369197 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.369243 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.369278 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.369305 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.473479 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.473541 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.473567 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.473598 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.473621 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.576663 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.576715 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.576727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.576745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.576758 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.678699 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.678739 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.678748 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.678762 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.678770 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.712251 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:30 crc kubenswrapper[4845]: E0202 10:33:30.712405 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.752638 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:05:58.522633834 +0000 UTC Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.782306 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.782372 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.782644 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.782670 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.782688 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.885917 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.885978 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.885996 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.886021 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.886039 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.988669 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.989102 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.989201 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.989293 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:30 crc kubenswrapper[4845]: I0202 10:33:30.989372 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:30Z","lastTransitionTime":"2026-02-02T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.092643 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.093028 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.093041 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.093097 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.093112 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.195627 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.195693 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.195710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.195736 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.195753 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.298157 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.298508 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.298600 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.298689 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.298775 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.401718 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.401771 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.401796 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.401824 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.401849 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.504287 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.504336 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.504353 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.504377 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.504399 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.607622 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.607692 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.607714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.607737 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.607753 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.710258 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.710324 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.710350 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.710380 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.710402 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.712548 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.712561 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:31 crc kubenswrapper[4845]: E0202 10:33:31.712793 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.713194 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:31 crc kubenswrapper[4845]: E0202 10:33:31.713251 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:31 crc kubenswrapper[4845]: E0202 10:33:31.713406 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.714330 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:33:31 crc kubenswrapper[4845]: E0202 10:33:31.714569 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.753854 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:18:26.305947938 +0000 UTC Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.813524 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.813560 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.813571 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.813586 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.813599 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.917575 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.917630 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.917647 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.917671 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:31 crc kubenswrapper[4845]: I0202 10:33:31.917687 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:31Z","lastTransitionTime":"2026-02-02T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.020453 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.020515 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.020532 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.020554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.020572 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.125300 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.125360 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.125378 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.125402 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.125419 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.229085 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.229135 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.229147 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.229164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.229176 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.331721 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.331766 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.331800 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.331820 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.331831 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.435282 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.435347 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.435366 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.435392 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.435409 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.538838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.539317 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.539471 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.539648 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.539784 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.642130 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.642182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.642194 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.642213 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.642226 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.712101 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:32 crc kubenswrapper[4845]: E0202 10:33:32.712286 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.744517 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.744584 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.744601 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.744624 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.744785 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.754946 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:24:28.201500379 +0000 UTC Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.847590 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.847660 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.847685 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.847753 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.847774 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.951067 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.951118 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.951136 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.951159 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:32 crc kubenswrapper[4845]: I0202 10:33:32.951175 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:32Z","lastTransitionTime":"2026-02-02T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.054539 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.054584 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.054596 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.054611 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.054620 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.158337 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.158406 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.158425 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.158451 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.158470 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.262275 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.262311 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.262321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.262336 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.262345 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.365447 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.365508 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.365525 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.365550 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.365567 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.468714 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.468788 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.468812 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.468834 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.468852 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.572213 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.572285 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.572296 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.572312 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.572322 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.675006 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.675084 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.675104 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.675130 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.675149 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.712494 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.712537 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:33 crc kubenswrapper[4845]: E0202 10:33:33.712698 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.712793 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:33 crc kubenswrapper[4845]: E0202 10:33:33.713020 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:33 crc kubenswrapper[4845]: E0202 10:33:33.713106 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.755127 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:44:21.240965445 +0000 UTC Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.778211 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.778279 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.778301 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.778325 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.778342 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.881313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.881387 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.881406 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.881430 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.881446 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.984981 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.985034 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.985051 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.985075 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:33 crc kubenswrapper[4845]: I0202 10:33:33.985092 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:33Z","lastTransitionTime":"2026-02-02T10:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.088182 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.088261 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.088299 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.088330 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.088358 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.191540 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.191608 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.191625 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.191649 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.191667 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.294353 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.294432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.294449 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.294473 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.294504 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.397578 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.397618 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.397630 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.397646 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.397657 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.501142 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.501196 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.501209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.501229 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.501242 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.603954 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.604099 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.604120 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.604192 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.604221 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.707244 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.707672 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.707838 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.708111 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.708296 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.712596 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:34 crc kubenswrapper[4845]: E0202 10:33:34.712976 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.755529 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:30:20.510948105 +0000 UTC Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.811168 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.811223 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.811241 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.811264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.811282 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.914200 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.914306 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.914321 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.914337 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:34 crc kubenswrapper[4845]: I0202 10:33:34.914350 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:34Z","lastTransitionTime":"2026-02-02T10:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.017827 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.018343 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.018438 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.018526 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.018599 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.140008 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.140096 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.140118 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.140145 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.140164 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.244161 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.244205 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.244279 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.244307 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.244324 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.347141 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.347195 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.347209 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.347232 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.347250 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.449930 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.450004 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.450028 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.450057 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.450079 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.552614 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.552731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.552758 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.552787 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.552806 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.655307 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.655374 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.655392 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.655418 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.655435 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.712336 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.712354 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:35 crc kubenswrapper[4845]: E0202 10:33:35.713036 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.712424 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:35 crc kubenswrapper[4845]: E0202 10:33:35.713208 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:35 crc kubenswrapper[4845]: E0202 10:33:35.712849 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.755668 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:50:06.897482429 +0000 UTC Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.758183 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.758345 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.758366 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.758389 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.758425 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.861840 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.861920 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.861938 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.861961 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.861978 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.965343 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.965432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.965464 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.965544 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:35 crc kubenswrapper[4845]: I0202 10:33:35.965605 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:35Z","lastTransitionTime":"2026-02-02T10:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.069164 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.069224 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.069240 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.069264 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.069283 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.172641 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.172710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.172727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.172750 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.172773 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.276057 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.276114 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.276134 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.276156 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.276175 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.379288 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.379365 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.379384 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.379411 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.379429 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.483249 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.483325 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.483348 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.483381 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.483400 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.587064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.587159 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.587180 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.587212 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.587234 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.690064 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.690100 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.690111 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.690125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.690137 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.712499 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:36 crc kubenswrapper[4845]: E0202 10:33:36.712669 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.756300 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:59:56.883911131 +0000 UTC Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.792530 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.792575 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.792592 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.792616 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.792633 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.895716 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.895777 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.895798 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.895826 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.895849 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.998257 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.998302 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.998313 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.998331 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:36 crc kubenswrapper[4845]: I0202 10:33:36.998343 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:36Z","lastTransitionTime":"2026-02-02T10:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.101727 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.101764 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.101782 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.101802 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.101813 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.204745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.204809 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.204829 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.204857 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.204874 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.307017 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.307081 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.307099 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.307129 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.307146 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.410345 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.410382 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.410392 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.410407 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.410418 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.513318 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.513758 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.513912 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.514055 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.514182 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.616747 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.617043 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.617137 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.617212 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.617295 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.712631 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:37 crc kubenswrapper[4845]: E0202 10:33:37.713052 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.712683 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:37 crc kubenswrapper[4845]: E0202 10:33:37.713312 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.712632 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:37 crc kubenswrapper[4845]: E0202 10:33:37.713735 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.720936 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.721003 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.721022 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.721049 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.721068 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.756820 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:37:36.887498072 +0000 UTC Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.824390 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.824429 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.824437 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.824470 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.824480 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.927412 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.927480 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.927506 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.927535 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:37 crc kubenswrapper[4845]: I0202 10:33:37.927556 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:37Z","lastTransitionTime":"2026-02-02T10:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.031588 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.031719 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.031745 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.031774 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.031797 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.135072 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.135497 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.135731 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.135949 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.136274 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.239125 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.239189 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.239206 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.239230 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.239250 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.342277 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.342373 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.342432 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.342454 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.342510 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.445337 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.445710 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.445867 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.446121 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.446263 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.523246 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.523315 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.523332 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.523358 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.523378 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.564307 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.564382 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.564399 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.564427 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.564445 4845 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:33:38Z","lastTransitionTime":"2026-02-02T10:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.593684 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw"] Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.594415 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.598475 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.598604 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.598693 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.598779 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.618564 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.618480886 podStartE2EDuration="45.618480886s" podCreationTimestamp="2026-02-02 10:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.617134045 +0000 UTC m=+99.708535495" watchObservedRunningTime="2026-02-02 10:33:38.618480886 +0000 UTC m=+99.709882386" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.651649 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.651627263 podStartE2EDuration="31.651627263s" podCreationTimestamp="2026-02-02 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.63343465 +0000 UTC m=+99.724836100" watchObservedRunningTime="2026-02-02 10:33:38.651627263 +0000 
UTC m=+99.743028713" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.683496 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.684051 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/948fb07c-2db9-45aa-805b-5ba192aae967-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.684220 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.684413 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/948fb07c-2db9-45aa-805b-5ba192aae967-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.684734 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.684944 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/948fb07c-2db9-45aa-805b-5ba192aae967-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: E0202 10:33:38.684003 4845 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:33:38 crc kubenswrapper[4845]: E0202 10:33:38.685329 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs podName:84cb7b66-62e7-4012-ab80-7c5e6ba51e35 nodeName:}" failed. No retries permitted until 2026-02-02 10:34:42.685302176 +0000 UTC m=+163.776703656 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs") pod "network-metrics-daemon-pmn9h" (UID: "84cb7b66-62e7-4012-ab80-7c5e6ba51e35") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.711393 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-thbz4" podStartSLOduration=78.711366909 podStartE2EDuration="1m18.711366909s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.695033162 +0000 UTC m=+99.786434652" watchObservedRunningTime="2026-02-02 10:33:38.711366909 +0000 UTC m=+99.802768369" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.712015 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:38 crc kubenswrapper[4845]: E0202 10:33:38.712148 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.734102 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rzb6b" podStartSLOduration=79.734083599 podStartE2EDuration="1m19.734083599s" podCreationTimestamp="2026-02-02 10:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.733908564 +0000 UTC m=+99.825310034" watchObservedRunningTime="2026-02-02 10:33:38.734083599 +0000 UTC m=+99.825485049" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.757085 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:12:22.327232456 +0000 UTC Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.757154 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.766949 4845 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.770775 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kzwst" podStartSLOduration=78.770756514 podStartE2EDuration="1m18.770756514s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.75518147 +0000 UTC m=+99.846582930" watchObservedRunningTime="2026-02-02 10:33:38.770756514 +0000 UTC m=+99.862157964" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.771047 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podStartSLOduration=78.771042372 podStartE2EDuration="1m18.771042372s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.770527367 +0000 UTC m=+99.861928827" watchObservedRunningTime="2026-02-02 10:33:38.771042372 +0000 UTC m=+99.862443822" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785494 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xdtrh" podStartSLOduration=78.7854619 podStartE2EDuration="1m18.7854619s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.785096659 +0000 UTC m=+99.876498119" watchObservedRunningTime="2026-02-02 10:33:38.7854619 +0000 UTC m=+99.876863390" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785602 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785638 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/948fb07c-2db9-45aa-805b-5ba192aae967-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785694 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/948fb07c-2db9-45aa-805b-5ba192aae967-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785730 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.785753 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/948fb07c-2db9-45aa-805b-5ba192aae967-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.786248 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.786705 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/948fb07c-2db9-45aa-805b-5ba192aae967-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.786756 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/948fb07c-2db9-45aa-805b-5ba192aae967-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.801841 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/948fb07c-2db9-45aa-805b-5ba192aae967-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.809661 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/948fb07c-2db9-45aa-805b-5ba192aae967-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwtsw\" (UID: \"948fb07c-2db9-45aa-805b-5ba192aae967\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.838004 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2rmk" podStartSLOduration=77.837977366 podStartE2EDuration="1m17.837977366s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.803956792 +0000 UTC m=+99.895358262" watchObservedRunningTime="2026-02-02 10:33:38.837977366 +0000 UTC m=+99.929378846" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 
10:33:38.838291 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=75.838283986 podStartE2EDuration="1m15.838283986s" podCreationTimestamp="2026-02-02 10:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.837290806 +0000 UTC m=+99.928692266" watchObservedRunningTime="2026-02-02 10:33:38.838283986 +0000 UTC m=+99.929685466" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.875747 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.875727704 podStartE2EDuration="1m18.875727704s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.875098005 +0000 UTC m=+99.966499485" watchObservedRunningTime="2026-02-02 10:33:38.875727704 +0000 UTC m=+99.967129164" Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.913378 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" Feb 02 10:33:38 crc kubenswrapper[4845]: W0202 10:33:38.928741 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod948fb07c_2db9_45aa_805b_5ba192aae967.slice/crio-43fd53f2bc5b1f32ce78c098d90a6eda1133a88b8b776b2c364e668f6421d332 WatchSource:0}: Error finding container 43fd53f2bc5b1f32ce78c098d90a6eda1133a88b8b776b2c364e668f6421d332: Status 404 returned error can't find the container with id 43fd53f2bc5b1f32ce78c098d90a6eda1133a88b8b776b2c364e668f6421d332 Feb 02 10:33:38 crc kubenswrapper[4845]: I0202 10:33:38.972789 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.972770583 podStartE2EDuration="1m12.972770583s" podCreationTimestamp="2026-02-02 10:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:38.954414335 +0000 UTC m=+100.045815795" watchObservedRunningTime="2026-02-02 10:33:38.972770583 +0000 UTC m=+100.064172043" Feb 02 10:33:39 crc kubenswrapper[4845]: I0202 10:33:39.327116 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" event={"ID":"948fb07c-2db9-45aa-805b-5ba192aae967","Type":"ContainerStarted","Data":"b85737c6a0c98af12e2cb2b520e560c55c207c7ccfaf7439fb5c84df84ee0147"} Feb 02 10:33:39 crc kubenswrapper[4845]: I0202 10:33:39.327172 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" event={"ID":"948fb07c-2db9-45aa-805b-5ba192aae967","Type":"ContainerStarted","Data":"43fd53f2bc5b1f32ce78c098d90a6eda1133a88b8b776b2c364e668f6421d332"} Feb 02 10:33:39 crc kubenswrapper[4845]: I0202 10:33:39.712464 4845 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:39 crc kubenswrapper[4845]: I0202 10:33:39.712476 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:39 crc kubenswrapper[4845]: I0202 10:33:39.712988 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:39 crc kubenswrapper[4845]: E0202 10:33:39.713372 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:39 crc kubenswrapper[4845]: E0202 10:33:39.713443 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:39 crc kubenswrapper[4845]: E0202 10:33:39.713520 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:40 crc kubenswrapper[4845]: I0202 10:33:40.711766 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:40 crc kubenswrapper[4845]: E0202 10:33:40.713678 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:41 crc kubenswrapper[4845]: I0202 10:33:41.712243 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:41 crc kubenswrapper[4845]: I0202 10:33:41.712264 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:41 crc kubenswrapper[4845]: E0202 10:33:41.712937 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:41 crc kubenswrapper[4845]: E0202 10:33:41.713129 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:41 crc kubenswrapper[4845]: I0202 10:33:41.712334 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:41 crc kubenswrapper[4845]: E0202 10:33:41.713459 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:42 crc kubenswrapper[4845]: I0202 10:33:42.712652 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:42 crc kubenswrapper[4845]: E0202 10:33:42.712871 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:42 crc kubenswrapper[4845]: I0202 10:33:42.714159 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:33:42 crc kubenswrapper[4845]: E0202 10:33:42.714431 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sh5vd_openshift-ovn-kubernetes(7b93b041-3f3f-47ba-a9d4-d09de1b326dc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" Feb 02 10:33:43 crc kubenswrapper[4845]: I0202 10:33:43.712379 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:43 crc kubenswrapper[4845]: I0202 10:33:43.712454 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:43 crc kubenswrapper[4845]: I0202 10:33:43.712405 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:43 crc kubenswrapper[4845]: E0202 10:33:43.712613 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:43 crc kubenswrapper[4845]: E0202 10:33:43.712744 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:43 crc kubenswrapper[4845]: E0202 10:33:43.712870 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:44 crc kubenswrapper[4845]: I0202 10:33:44.711986 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:44 crc kubenswrapper[4845]: E0202 10:33:44.712214 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:45 crc kubenswrapper[4845]: I0202 10:33:45.713363 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:45 crc kubenswrapper[4845]: I0202 10:33:45.713445 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:45 crc kubenswrapper[4845]: I0202 10:33:45.713470 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:45 crc kubenswrapper[4845]: E0202 10:33:45.713601 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:45 crc kubenswrapper[4845]: E0202 10:33:45.713717 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:45 crc kubenswrapper[4845]: E0202 10:33:45.714292 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:46 crc kubenswrapper[4845]: I0202 10:33:46.711728 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:46 crc kubenswrapper[4845]: E0202 10:33:46.712169 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:47 crc kubenswrapper[4845]: I0202 10:33:47.712641 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:47 crc kubenswrapper[4845]: I0202 10:33:47.712713 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:47 crc kubenswrapper[4845]: E0202 10:33:47.712874 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:47 crc kubenswrapper[4845]: I0202 10:33:47.712936 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:47 crc kubenswrapper[4845]: E0202 10:33:47.713024 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:47 crc kubenswrapper[4845]: E0202 10:33:47.713126 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:48 crc kubenswrapper[4845]: I0202 10:33:48.712494 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:48 crc kubenswrapper[4845]: E0202 10:33:48.713020 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:49 crc kubenswrapper[4845]: I0202 10:33:49.711837 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:49 crc kubenswrapper[4845]: I0202 10:33:49.711980 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:49 crc kubenswrapper[4845]: I0202 10:33:49.714759 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:49 crc kubenswrapper[4845]: E0202 10:33:49.714739 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:49 crc kubenswrapper[4845]: E0202 10:33:49.714986 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:49 crc kubenswrapper[4845]: E0202 10:33:49.715098 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:50 crc kubenswrapper[4845]: I0202 10:33:50.712645 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:50 crc kubenswrapper[4845]: E0202 10:33:50.712854 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:51 crc kubenswrapper[4845]: I0202 10:33:51.712050 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:51 crc kubenswrapper[4845]: I0202 10:33:51.712078 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:51 crc kubenswrapper[4845]: E0202 10:33:51.712246 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:51 crc kubenswrapper[4845]: I0202 10:33:51.712325 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:51 crc kubenswrapper[4845]: E0202 10:33:51.712447 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:51 crc kubenswrapper[4845]: E0202 10:33:51.712525 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:52 crc kubenswrapper[4845]: I0202 10:33:52.712440 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:52 crc kubenswrapper[4845]: E0202 10:33:52.712620 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:53 crc kubenswrapper[4845]: I0202 10:33:53.712272 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:53 crc kubenswrapper[4845]: E0202 10:33:53.712630 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:53 crc kubenswrapper[4845]: I0202 10:33:53.712663 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:53 crc kubenswrapper[4845]: E0202 10:33:53.712803 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:53 crc kubenswrapper[4845]: I0202 10:33:53.712850 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:53 crc kubenswrapper[4845]: E0202 10:33:53.712955 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:54 crc kubenswrapper[4845]: I0202 10:33:54.712203 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:54 crc kubenswrapper[4845]: E0202 10:33:54.712359 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.381602 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/1.log" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.382341 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/0.log" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.382396 4845 generic.go:334] "Generic (PLEG): container finished" podID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" containerID="a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f" exitCode=1 Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.382435 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerDied","Data":"a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f"} Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.382484 4845 scope.go:117] "RemoveContainer" containerID="2b7e296528c0b05e1fec124320c451f0dbd9fb5fb76cb9e349019fd2d982de3a" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.383059 
4845 scope.go:117] "RemoveContainer" containerID="a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f" Feb 02 10:33:55 crc kubenswrapper[4845]: E0202 10:33:55.385202 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kzwst_openshift-multus(310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3)\"" pod="openshift-multus/multus-kzwst" podUID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.412013 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwtsw" podStartSLOduration=95.411994606 podStartE2EDuration="1m35.411994606s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:39.348617554 +0000 UTC m=+100.440019044" watchObservedRunningTime="2026-02-02 10:33:55.411994606 +0000 UTC m=+116.503396056" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.711956 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.711994 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:55 crc kubenswrapper[4845]: E0202 10:33:55.712084 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:55 crc kubenswrapper[4845]: E0202 10:33:55.712243 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:55 crc kubenswrapper[4845]: I0202 10:33:55.712788 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:55 crc kubenswrapper[4845]: E0202 10:33:55.713244 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:56 crc kubenswrapper[4845]: I0202 10:33:56.388643 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/1.log" Feb 02 10:33:56 crc kubenswrapper[4845]: I0202 10:33:56.711852 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:56 crc kubenswrapper[4845]: E0202 10:33:56.712082 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:56 crc kubenswrapper[4845]: I0202 10:33:56.713262 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.393907 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/3.log" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.396666 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerStarted","Data":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.397276 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.430822 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podStartSLOduration=97.430799539 podStartE2EDuration="1m37.430799539s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:33:57.430629104 +0000 UTC m=+118.522030594" watchObservedRunningTime="2026-02-02 
10:33:57.430799539 +0000 UTC m=+118.522200999" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.531792 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pmn9h"] Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.531919 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:57 crc kubenswrapper[4845]: E0202 10:33:57.532006 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.711675 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.711734 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:57 crc kubenswrapper[4845]: I0202 10:33:57.711795 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:57 crc kubenswrapper[4845]: E0202 10:33:57.712005 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:57 crc kubenswrapper[4845]: E0202 10:33:57.712123 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:57 crc kubenswrapper[4845]: E0202 10:33:57.712270 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.701405 4845 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 10:33:59 crc kubenswrapper[4845]: I0202 10:33:59.711835 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.713127 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:33:59 crc kubenswrapper[4845]: I0202 10:33:59.713205 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:33:59 crc kubenswrapper[4845]: I0202 10:33:59.713205 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.713303 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:33:59 crc kubenswrapper[4845]: I0202 10:33:59.713222 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.713410 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.713452 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:33:59 crc kubenswrapper[4845]: E0202 10:33:59.843972 4845 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:34:01 crc kubenswrapper[4845]: I0202 10:34:01.711956 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:01 crc kubenswrapper[4845]: I0202 10:34:01.711992 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:01 crc kubenswrapper[4845]: I0202 10:34:01.712052 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:01 crc kubenswrapper[4845]: E0202 10:34:01.712125 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:34:01 crc kubenswrapper[4845]: I0202 10:34:01.712227 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:01 crc kubenswrapper[4845]: E0202 10:34:01.712482 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:34:01 crc kubenswrapper[4845]: E0202 10:34:01.712602 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:34:01 crc kubenswrapper[4845]: E0202 10:34:01.712695 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:34:03 crc kubenswrapper[4845]: I0202 10:34:03.712473 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:03 crc kubenswrapper[4845]: I0202 10:34:03.712552 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:03 crc kubenswrapper[4845]: E0202 10:34:03.712673 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:34:03 crc kubenswrapper[4845]: I0202 10:34:03.712752 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:03 crc kubenswrapper[4845]: E0202 10:34:03.712925 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:34:03 crc kubenswrapper[4845]: E0202 10:34:03.713057 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:34:03 crc kubenswrapper[4845]: I0202 10:34:03.713246 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:03 crc kubenswrapper[4845]: E0202 10:34:03.713362 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:34:04 crc kubenswrapper[4845]: E0202 10:34:04.845077 4845 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:34:05 crc kubenswrapper[4845]: I0202 10:34:05.712202 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:05 crc kubenswrapper[4845]: E0202 10:34:05.712616 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:34:05 crc kubenswrapper[4845]: I0202 10:34:05.712693 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:05 crc kubenswrapper[4845]: I0202 10:34:05.712673 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:05 crc kubenswrapper[4845]: I0202 10:34:05.712715 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:05 crc kubenswrapper[4845]: E0202 10:34:05.713113 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:34:05 crc kubenswrapper[4845]: E0202 10:34:05.713370 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:34:05 crc kubenswrapper[4845]: E0202 10:34:05.713526 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:34:06 crc kubenswrapper[4845]: I0202 10:34:06.712366 4845 scope.go:117] "RemoveContainer" containerID="a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f" Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.434052 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/1.log" Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.434512 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerStarted","Data":"276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b"} Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.711613 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.711625 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:07 crc kubenswrapper[4845]: E0202 10:34:07.711852 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.711874 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:07 crc kubenswrapper[4845]: I0202 10:34:07.712005 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:07 crc kubenswrapper[4845]: E0202 10:34:07.712120 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:34:07 crc kubenswrapper[4845]: E0202 10:34:07.712224 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:34:07 crc kubenswrapper[4845]: E0202 10:34:07.712787 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:34:09 crc kubenswrapper[4845]: I0202 10:34:09.712122 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:09 crc kubenswrapper[4845]: I0202 10:34:09.712205 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:09 crc kubenswrapper[4845]: I0202 10:34:09.712217 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:09 crc kubenswrapper[4845]: I0202 10:34:09.715083 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:09 crc kubenswrapper[4845]: E0202 10:34:09.715163 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:34:09 crc kubenswrapper[4845]: E0202 10:34:09.715332 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pmn9h" podUID="84cb7b66-62e7-4012-ab80-7c5e6ba51e35" Feb 02 10:34:09 crc kubenswrapper[4845]: E0202 10:34:09.715017 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:34:09 crc kubenswrapper[4845]: E0202 10:34:09.715473 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.711817 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.711842 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.711954 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.711963 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.716530 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.717002 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.717027 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.717263 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.719918 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 10:34:11 crc kubenswrapper[4845]: I0202 10:34:11.720252 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 10:34:17 crc kubenswrapper[4845]: I0202 10:34:17.849534 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.858554 4845 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.909372 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.910785 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.916502 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.923551 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.923980 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.927002 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.945837 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.946412 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.946912 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.947811 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsdsh"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.948542 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.948835 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.948616 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.949690 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.950445 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.951110 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.952319 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.955503 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4rbqr"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.956163 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4rbqr" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.956617 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.958362 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.958583 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.958600 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.958369 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.958651 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.961566 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.961671 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.961772 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.962134 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.962411 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 
10:34:19.962456 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.962746 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.963102 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.969323 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6szh7"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.969848 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.970759 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.970971 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.971118 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.971276 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.971377 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.971471 4845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.975185 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.975779 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.975991 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.976223 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.977302 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.977973 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978116 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978122 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978274 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978364 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 
10:34:19.978471 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978508 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978210 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978688 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978477 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.978237 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.979071 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.980076 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.980206 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.980351 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.980586 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 
10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.980832 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.981580 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.983352 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.984139 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.984339 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.984638 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.984780 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.984938 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.985475 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.985845 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.986059 4845 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.986208 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.986206 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.986406 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.986566 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.987088 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.987284 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.987470 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.987811 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.989124 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8"] Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.989417 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990036 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990118 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990309 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990561 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990706 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.990866 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.991068 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:34:19 crc kubenswrapper[4845]: I0202 10:34:19.991247 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.015994 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.016177 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.020826 4845 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.021139 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.024115 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.024214 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.025088 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.032624 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.033167 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.033551 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.033610 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.033820 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034019 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034193 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034267 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034324 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034422 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.034511 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.035255 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.035778 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.035904 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.036350 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.039072 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.040625 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.040777 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.040806 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.041178 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.041208 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.041103 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.041462 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.042008 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.042063 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.042493 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.042575 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pt97w"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.043061 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.043397 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044024 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044078 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044116 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044258 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044272 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044339 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044404 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044451 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044525 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044605 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.044826 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.052473 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054229 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054264 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-serving-cert\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054291 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-trusted-ca\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054315 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-node-pullsecrets\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054338 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054360 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-config\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054377 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054404 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9rv\" (UniqueName: \"kubernetes.io/projected/66ae9a2f-1c24-4a65-b961-bd9431c667f6-kube-api-access-fk9rv\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054425 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6b8\" (UniqueName: \"kubernetes.io/projected/c29a9366-4664-4228-af51-b56b63c976b6-kube-api-access-pm6b8\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054451 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054473 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6md\" (UniqueName: \"kubernetes.io/projected/6bf70521-8fdf-400f-b7cd-d96b609b4783-kube-api-access-kr6md\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054495 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054521 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-encryption-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054542 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw74n\" (UniqueName: \"kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054566 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-image-import-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054583 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29a9366-4664-4228-af51-b56b63c976b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054603 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-config\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054619 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054641 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-serving-cert\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054661 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-images\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054682 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054716 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit-dir\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054739 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwbv5\" (UniqueName: \"kubernetes.io/projected/f8a6e66d-c97e-43eb-8ff0-864e543f5488-kube-api-access-xwbv5\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054757 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d02c35e0-ade4-4316-b4da-88c6dd349220-machine-approver-tls\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054783 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl77h\" (UniqueName: \"kubernetes.io/projected/611eb8a8-5fc3-4325-96be-1dba1144259b-kube-api-access-cl77h\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054804 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8a6e66d-c97e-43eb-8ff0-864e543f5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054826 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054850 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zm2\" (UniqueName: \"kubernetes.io/projected/d02c35e0-ade4-4316-b4da-88c6dd349220-kube-api-access-h4zm2\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054881 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054923 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054940 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-serving-cert\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054962 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9gh4\" (UniqueName: \"kubernetes.io/projected/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-kube-api-access-x9gh4\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054993 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055012 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf70521-8fdf-400f-b7cd-d96b609b4783-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055036 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822n9\" (UniqueName: \"kubernetes.io/projected/a70e2a3d-9afe-4437-b9ef-fe175eee93d6-kube-api-access-822n9\") pod \"downloads-7954f5f757-4rbqr\" (UID: \"a70e2a3d-9afe-4437-b9ef-fe175eee93d6\") " pod="openshift-console/downloads-7954f5f757-4rbqr"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055077 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055082 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq4d\" (UniqueName: \"kubernetes.io/projected/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-kube-api-access-jkq4d\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055537 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-auth-proxy-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055603 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611eb8a8-5fc3-4325-96be-1dba1144259b-serving-cert\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.054952 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rbhk2"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055745 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-config\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-client\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.055962 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29a9366-4664-4228-af51-b56b63c976b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.056318 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.056423 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rbhk2"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.057148 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.057427 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.058539 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2vcpg"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.058706 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.058921 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059098 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059179 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059296 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059336 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059451 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059469 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.059179 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.060149 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.060376 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.064936 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.071385 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.072061 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.072973 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.076045 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.078542 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.086084 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.087834 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.091326 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.092786 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4rbqr"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.094339 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.095862 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.096105 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.096687 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.096839 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.096948 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.099732 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.100554 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.100814 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.104858 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vq42n"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.105902 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.108161 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.116466 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.125660 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.126237 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.126348 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.126627 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.135557 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.135993 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.136966 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.138339 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.139089 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.139262 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wc6bh"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.139610 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.144118 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.155246 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pt97w"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.156773 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d758a8-6722-4c1b-be56-fe2bb6d27830-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.156918 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d758a8-6722-4c1b-be56-fe2bb6d27830-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157005 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157102 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-image-import-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157197 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-config\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157284 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157366 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157458 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157538 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157617 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7dd651-1a0c-43b7-8c52-525200a7146c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157699 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157789 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-serving-cert\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158286 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158378 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ab0690-627e-4d43-b80c-3b3f96b06249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf"
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157168 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158514 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr"]
Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158470 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr"
Feb 02
10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157489 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158579 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bcf9211-edc2-4706-a9ac-b5f38b856186-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158608 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-srv-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158633 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158657 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7blj\" (UniqueName: \"kubernetes.io/projected/1b5822e3-ccfc-4261-9a39-8f02356add90-kube-api-access-l7blj\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 
02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158686 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee590ca4-c2f6-4dcf-973d-df26701d689f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158714 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl77h\" (UniqueName: \"kubernetes.io/projected/611eb8a8-5fc3-4325-96be-1dba1144259b-kube-api-access-cl77h\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158736 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81ab0690-627e-4d43-b80c-3b3f96b06249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158760 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158780 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qrd2n\" (UniqueName: \"kubernetes.io/projected/0bc78651-e3a0-4988-acfa-89a6391f4aa5-kube-api-access-qrd2n\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158412 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-config\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158800 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.157980 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-image-import-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158826 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158919 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158941 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158945 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158959 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf70521-8fdf-400f-b7cd-d96b609b4783-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158978 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b5822e3-ccfc-4261-9a39-8f02356add90-proxy-tls\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 
10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.158996 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-policies\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159105 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159424 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159482 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159511 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5d758a8-6722-4c1b-be56-fe2bb6d27830-config\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: 
\"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159533 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phvb\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-kube-api-access-8phvb\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159509 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159585 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e903551f-3d78-4de4-a08a-ce9ea234942c-metrics-tls\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159623 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-config\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159641 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a7dd651-1a0c-43b7-8c52-525200a7146c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159677 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159707 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-client\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.159738 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29a9366-4664-4228-af51-b56b63c976b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160195 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czplh\" (UniqueName: \"kubernetes.io/projected/3ad81540-c66f-4f41-98a5-12aa607142fd-kube-api-access-czplh\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160333 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160413 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-config\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160464 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29a9366-4664-4228-af51-b56b63c976b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160345 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160667 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-srv-cert\") pod 
\"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160772 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.160874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-serving-cert\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161026 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-client\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161134 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8291b32a-8322-4027-af13-cd9f10390406-service-ca-bundle\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161248 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-node-pullsecrets\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161321 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-node-pullsecrets\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161429 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161455 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2vcpg"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161537 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161599 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-default-certificate\") pod \"router-default-5444994796-rbhk2\" (UID: 
\"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161621 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ngtx\" (UniqueName: \"kubernetes.io/projected/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-kube-api-access-7ngtx\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161644 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6b8\" (UniqueName: \"kubernetes.io/projected/c29a9366-4664-4228-af51-b56b63c976b6-kube-api-access-pm6b8\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.161897 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162151 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.162238 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9rv\" (UniqueName: \"kubernetes.io/projected/66ae9a2f-1c24-4a65-b961-bd9431c667f6-kube-api-access-fk9rv\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162294 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162317 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6md\" (UniqueName: \"kubernetes.io/projected/6bf70521-8fdf-400f-b7cd-d96b609b4783-kube-api-access-kr6md\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162335 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162374 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2kj\" (UniqueName: \"kubernetes.io/projected/8dbf6657-96c2-472f-9e4c-0745a4c249be-kube-api-access-pv2kj\") pod \"apiserver-7bbb656c7d-49vf9\" 
(UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162395 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162413 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-encryption-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162465 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ab0690-627e-4d43-b80c-3b3f96b06249-config\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.162849 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163075 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163164 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163354 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstpx\" (UniqueName: \"kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163382 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-metrics-certs\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163420 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.163588 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-etcd-client\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163710 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf70521-8fdf-400f-b7cd-d96b609b4783-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163795 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw74n\" (UniqueName: \"kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163825 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29a9366-4664-4228-af51-b56b63c976b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.164190 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.164210 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-serving-cert\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.163844 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-dir\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.164433 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-images\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.164453 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmpd\" (UniqueName: \"kubernetes.io/projected/5bcf9211-edc2-4706-a9ac-b5f38b856186-kube-api-access-hkmpd\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.164872 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.164468 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.165422 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.165736 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6bf70521-8fdf-400f-b7cd-d96b609b4783-images\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.166051 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29a9366-4664-4228-af51-b56b63c976b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.167205 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/66ae9a2f-1c24-4a65-b961-bd9431c667f6-encryption-config\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.167258 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g"] Feb 02 10:34:20 crc 
kubenswrapper[4845]: I0202 10:34:20.168827 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.169674 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mgqnl"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.170231 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-serving-cert\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.170265 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mgqnl" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.171130 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-f2fvl"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.171858 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.165436 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45wk\" (UniqueName: \"kubernetes.io/projected/d1f9a812-d62e-44ca-b83f-5f240ede92a0-kube-api-access-q45wk\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.172989 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7dd651-1a0c-43b7-8c52-525200a7146c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173026 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwbv5\" (UniqueName: \"kubernetes.io/projected/f8a6e66d-c97e-43eb-8ff0-864e543f5488-kube-api-access-xwbv5\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173068 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit-dir\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173143 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/d02c35e0-ade4-4316-b4da-88c6dd349220-machine-approver-tls\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173176 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-service-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173202 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bcf9211-edc2-4706-a9ac-b5f38b856186-proxy-tls\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173236 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8w4\" (UniqueName: \"kubernetes.io/projected/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-kube-api-access-7l8w4\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173296 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-images\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173325 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6x5c\" (UniqueName: \"kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173357 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8a6e66d-c97e-43eb-8ff0-864e543f5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173396 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit-dir\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173457 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173509 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: 
I0202 10:34:20.173547 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-encryption-config\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173729 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173824 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zm2\" (UniqueName: \"kubernetes.io/projected/d02c35e0-ade4-4316-b4da-88c6dd349220-kube-api-access-h4zm2\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173847 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173872 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-serving-cert\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: 
\"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173909 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9gh4\" (UniqueName: \"kubernetes.io/projected/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-kube-api-access-x9gh4\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173954 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee590ca4-c2f6-4dcf-973d-df26701d689f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.173988 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-config\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174011 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174048 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174080 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-822n9\" (UniqueName: \"kubernetes.io/projected/a70e2a3d-9afe-4437-b9ef-fe175eee93d6-kube-api-access-822n9\") pod \"downloads-7954f5f757-4rbqr\" (UID: \"a70e2a3d-9afe-4437-b9ef-fe175eee93d6\") " pod="openshift-console/downloads-7954f5f757-4rbqr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174119 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174145 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174174 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq4d\" (UniqueName: \"kubernetes.io/projected/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-kube-api-access-jkq4d\") pod \"openshift-config-operator-7777fb866f-x9pr7\" 
(UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174199 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174224 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174257 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-auth-proxy-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174279 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611eb8a8-5fc3-4325-96be-1dba1144259b-serving-cert\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174301 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174329 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcc7\" (UniqueName: \"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-kube-api-access-rrcc7\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174352 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174374 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e903551f-3d78-4de4-a08a-ce9ea234942c-trusted-ca\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174431 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174455 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2jq\" (UniqueName: \"kubernetes.io/projected/8291b32a-8322-4027-af13-cd9f10390406-kube-api-access-8r2jq\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174492 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-serving-cert\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174512 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1f9a812-d62e-44ca-b83f-5f240ede92a0-metrics-tls\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174538 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-trusted-ca\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174563 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-config\") pod 
\"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174605 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-serving-cert\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174626 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174646 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-stats-auth\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174677 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.174703 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-client\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.174930 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.175718 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-config\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.176363 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.176472 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/66ae9a2f-1c24-4a65-b961-bd9431c667f6-audit\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.176537 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-serving-cert\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.177134 
4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611eb8a8-5fc3-4325-96be-1dba1144259b-trusted-ca\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.177414 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.177800 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d02c35e0-ade4-4316-b4da-88c6dd349220-machine-approver-tls\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.177930 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d02c35e0-ade4-4316-b4da-88c6dd349220-auth-proxy-config\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.178406 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611eb8a8-5fc3-4325-96be-1dba1144259b-serving-cert\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.178937 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f8a6e66d-c97e-43eb-8ff0-864e543f5488-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.179795 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.180939 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.182107 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.183293 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.191665 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.192836 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.194292 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.195826 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.199767 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.201193 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.202291 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsdsh"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.203358 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.204468 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f2fvl"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.205512 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.207256 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vq42n"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.208347 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.209407 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.210425 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.211452 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.212445 4845 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6szh7"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.212493 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.213425 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz25"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.214561 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mgqnl"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.214722 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.215425 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz25"] Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.233246 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.252751 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.272811 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275600 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-service-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.275640 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bcf9211-edc2-4706-a9ac-b5f38b856186-proxy-tls\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275664 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8w4\" (UniqueName: \"kubernetes.io/projected/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-kube-api-access-7l8w4\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275689 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-images\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275716 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275739 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6x5c\" (UniqueName: \"kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275761 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-encryption-config\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275790 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275820 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee590ca4-c2f6-4dcf-973d-df26701d689f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275851 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-config\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.275996 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276023 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276164 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276192 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276234 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276261 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrcc7\" (UniqueName: \"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-kube-api-access-rrcc7\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276282 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276306 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e903551f-3d78-4de4-a08a-ce9ea234942c-trusted-ca\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: 
\"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276331 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276356 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r2jq\" (UniqueName: \"kubernetes.io/projected/8291b32a-8322-4027-af13-cd9f10390406-kube-api-access-8r2jq\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276376 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-serving-cert\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276396 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1f9a812-d62e-44ca-b83f-5f240ede92a0-metrics-tls\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276418 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-serving-cert\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276437 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276458 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-stats-auth\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276481 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276505 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-client\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276527 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d758a8-6722-4c1b-be56-fe2bb6d27830-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276547 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d758a8-6722-4c1b-be56-fe2bb6d27830-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276567 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276591 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276614 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276636 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7dd651-1a0c-43b7-8c52-525200a7146c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276661 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276686 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276711 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ab0690-627e-4d43-b80c-3b3f96b06249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276737 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276762 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bcf9211-edc2-4706-a9ac-b5f38b856186-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276792 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-srv-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276812 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276836 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7blj\" (UniqueName: \"kubernetes.io/projected/1b5822e3-ccfc-4261-9a39-8f02356add90-kube-api-access-l7blj\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276859 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee590ca4-c2f6-4dcf-973d-df26701d689f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276901 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81ab0690-627e-4d43-b80c-3b3f96b06249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276934 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276957 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrd2n\" (UniqueName: \"kubernetes.io/projected/0bc78651-e3a0-4988-acfa-89a6391f4aa5-kube-api-access-qrd2n\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.276977 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277001 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b5822e3-ccfc-4261-9a39-8f02356add90-proxy-tls\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277021 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-policies\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277041 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5d758a8-6722-4c1b-be56-fe2bb6d27830-config\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277083 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8phvb\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-kube-api-access-8phvb\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277117 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7dd651-1a0c-43b7-8c52-525200a7146c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277137 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e903551f-3d78-4de4-a08a-ce9ea234942c-metrics-tls\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277164 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277189 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czplh\" (UniqueName: \"kubernetes.io/projected/3ad81540-c66f-4f41-98a5-12aa607142fd-kube-api-access-czplh\") pod \"catalog-operator-68c6474976-zznfs\" (UID: 
\"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277210 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277212 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277229 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-srv-cert\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277272 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-client\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277294 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8291b32a-8322-4027-af13-cd9f10390406-service-ca-bundle\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277290 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277319 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277336 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-default-certificate\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277356 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ngtx\" (UniqueName: \"kubernetes.io/projected/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-kube-api-access-7ngtx\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc 
kubenswrapper[4845]: I0202 10:34:20.277391 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277665 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2kj\" (UniqueName: \"kubernetes.io/projected/8dbf6657-96c2-472f-9e4c-0745a4c249be-kube-api-access-pv2kj\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277710 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ab0690-627e-4d43-b80c-3b3f96b06249-config\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277735 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstpx\" (UniqueName: \"kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277773 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-metrics-certs\") pod \"router-default-5444994796-rbhk2\" (UID: 
\"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277802 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277828 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-dir\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277852 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmpd\" (UniqueName: \"kubernetes.io/projected/5bcf9211-edc2-4706-a9ac-b5f38b856186-kube-api-access-hkmpd\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277905 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q45wk\" (UniqueName: 
\"kubernetes.io/projected/d1f9a812-d62e-44ca-b83f-5f240ede92a0-kube-api-access-q45wk\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.277924 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7dd651-1a0c-43b7-8c52-525200a7146c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.278257 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-policies\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.278344 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.278768 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dbf6657-96c2-472f-9e4c-0745a4c249be-audit-dir\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.279116 4845 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7dd651-1a0c-43b7-8c52-525200a7146c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.279470 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.279716 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.279864 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.280389 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.280690 4845 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281119 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281184 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.278255 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281223 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-encryption-config\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281472 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281594 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee590ca4-c2f6-4dcf-973d-df26701d689f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.281806 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbf6657-96c2-472f-9e4c-0745a4c249be-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282275 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bcf9211-edc2-4706-a9ac-b5f38b856186-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282305 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282401 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282688 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282766 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-serving-cert\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.282943 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee590ca4-c2f6-4dcf-973d-df26701d689f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.283148 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7dd651-1a0c-43b7-8c52-525200a7146c-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.283349 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.283711 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.284203 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.284839 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dbf6657-96c2-472f-9e4c-0745a4c249be-etcd-client\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.284933 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b5d758a8-6722-4c1b-be56-fe2bb6d27830-config\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.285233 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e903551f-3d78-4de4-a08a-ce9ea234942c-metrics-tls\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.285509 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.286215 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e903551f-3d78-4de4-a08a-ce9ea234942c-trusted-ca\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.286457 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.286641 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d758a8-6722-4c1b-be56-fe2bb6d27830-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.287370 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.292973 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.313370 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.332690 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.352749 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.363184 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-serving-cert\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 
10:34:20.372875 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.380407 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-client\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.393483 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.413733 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.417047 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-config\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.433318 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.438044 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.453499 4845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.457027 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-etcd-service-ca\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.493070 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.513132 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.520302 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bcf9211-edc2-4706-a9ac-b5f38b856186-proxy-tls\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.532963 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.541673 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-default-certificate\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.553457 4845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.561040 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-metrics-certs\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.572318 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.595613 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.599642 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8291b32a-8322-4027-af13-cd9f10390406-service-ca-bundle\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.613244 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.623163 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8291b32a-8322-4027-af13-cd9f10390406-stats-auth\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.632934 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.654546 4845 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.673669 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.694879 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.706175 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ab0690-627e-4d43-b80c-3b3f96b06249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.713536 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.733238 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.753608 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.761651 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.773750 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.793698 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.813813 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.822338 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.833181 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.837706 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b5822e3-ccfc-4261-9a39-8f02356add90-images\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.853991 4845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.874094 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.883814 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b5822e3-ccfc-4261-9a39-8f02356add90-proxy-tls\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.893815 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.913450 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.934116 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.940447 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ab0690-627e-4d43-b80c-3b3f96b06249-config\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.954111 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.961417 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1f9a812-d62e-44ca-b83f-5f240ede92a0-metrics-tls\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.973807 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:34:20 crc kubenswrapper[4845]: I0202 10:34:20.993403 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.014049 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.033865 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.053442 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.071998 4845 request.go:700] Waited for 1.011531519s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.073702 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.093787 4845 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.114209 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.126623 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-srv-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.133600 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.142843 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.143154 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bc78651-e3a0-4988-acfa-89a6391f4aa5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.153675 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.177294 
4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.213990 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.223788 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ad81540-c66f-4f41-98a5-12aa607142fd-srv-cert\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.233166 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.253460 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.273036 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.293782 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.313634 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.346855 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.353534 4845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.373757 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.394175 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.413865 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.434126 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.453928 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.473309 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.493219 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.513668 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.533879 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.554573 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 
10:34:21.573273 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.594293 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.613730 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.633596 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.655675 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.674071 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.694804 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.713601 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.734353 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.754570 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.775460 4845 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.793941 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.829620 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl77h\" (UniqueName: \"kubernetes.io/projected/611eb8a8-5fc3-4325-96be-1dba1144259b-kube-api-access-cl77h\") pod \"console-operator-58897d9998-6szh7\" (UID: \"611eb8a8-5fc3-4325-96be-1dba1144259b\") " pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.859327 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9rv\" (UniqueName: \"kubernetes.io/projected/66ae9a2f-1c24-4a65-b961-bd9431c667f6-kube-api-access-fk9rv\") pod \"apiserver-76f77b778f-xsdsh\" (UID: \"66ae9a2f-1c24-4a65-b961-bd9431c667f6\") " pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.887566 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6md\" (UniqueName: \"kubernetes.io/projected/6bf70521-8fdf-400f-b7cd-d96b609b4783-kube-api-access-kr6md\") pod \"machine-api-operator-5694c8668f-xk8gn\" (UID: \"6bf70521-8fdf-400f-b7cd-d96b609b4783\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.899438 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6b8\" (UniqueName: \"kubernetes.io/projected/c29a9366-4664-4228-af51-b56b63c976b6-kube-api-access-pm6b8\") pod \"openshift-apiserver-operator-796bbdcf4f-rch6p\" (UID: \"c29a9366-4664-4228-af51-b56b63c976b6\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.914668 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw74n\" (UniqueName: \"kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n\") pod \"route-controller-manager-6576b87f9c-jvc49\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.915932 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.928424 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.932918 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.953680 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.973001 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:34:21 crc kubenswrapper[4845]: I0202 10:34:21.993045 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.012793 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.035612 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:34:22 crc 
kubenswrapper[4845]: I0202 10:34:22.062237 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.069951 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwbv5\" (UniqueName: \"kubernetes.io/projected/f8a6e66d-c97e-43eb-8ff0-864e543f5488-kube-api-access-xwbv5\") pod \"cluster-samples-operator-665b6dd947-8btfx\" (UID: \"f8a6e66d-c97e-43eb-8ff0-864e543f5488\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.079783 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.091613 4845 request.go:700] Waited for 1.916291334s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.094732 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkq4d\" (UniqueName: \"kubernetes.io/projected/6de6b4aa-d335-4eb0-b880-7a21c9336ebf-kube-api-access-jkq4d\") pod \"openshift-config-operator-7777fb866f-x9pr7\" (UID: \"6de6b4aa-d335-4eb0-b880-7a21c9336ebf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.116631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9gh4\" (UniqueName: \"kubernetes.io/projected/1eb84a90-bb90-4b5f-9e30-7415cb27cd39-kube-api-access-x9gh4\") pod \"authentication-operator-69f744f599-mvp5t\" (UID: \"1eb84a90-bb90-4b5f-9e30-7415cb27cd39\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.126679 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.127025 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zm2\" (UniqueName: \"kubernetes.io/projected/d02c35e0-ade4-4316-b4da-88c6dd349220-kube-api-access-h4zm2\") pod \"machine-approver-56656f9798-bl6zg\" (UID: \"d02c35e0-ade4-4316-b4da-88c6dd349220\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.134826 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6szh7"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.153185 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.153326 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.159177 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-822n9\" (UniqueName: \"kubernetes.io/projected/a70e2a3d-9afe-4437-b9ef-fe175eee93d6-kube-api-access-822n9\") pod \"downloads-7954f5f757-4rbqr\" (UID: \"a70e2a3d-9afe-4437-b9ef-fe175eee93d6\") " pod="openshift-console/downloads-7954f5f757-4rbqr" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.162498 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.173104 4845 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.174369 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4rbqr" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.187119 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.193209 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.197183 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.210540 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.229191 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8w4\" (UniqueName: \"kubernetes.io/projected/2b15a9aa-d443-4058-8e02-0f0eafbd7dd9-kube-api-access-7l8w4\") pod \"etcd-operator-b45778765-pt97w\" (UID: \"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.248727 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6x5c\" (UniqueName: \"kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c\") pod \"oauth-openshift-558db77b4-w989s\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.270596 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xsdsh"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.276386 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.278500 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81ab0690-627e-4d43-b80c-3b3f96b06249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-25kbf\" (UID: \"81ab0690-627e-4d43-b80c-3b3f96b06249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.294143 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrd2n\" (UniqueName: 
\"kubernetes.io/projected/0bc78651-e3a0-4988-acfa-89a6391f4aa5-kube-api-access-qrd2n\") pod \"olm-operator-6b444d44fb-t62rp\" (UID: \"0bc78651-e3a0-4988-acfa-89a6391f4aa5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.303140 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.306128 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r2jq\" (UniqueName: \"kubernetes.io/projected/8291b32a-8322-4027-af13-cd9f10390406-kube-api-access-8r2jq\") pod \"router-default-5444994796-rbhk2\" (UID: \"8291b32a-8322-4027-af13-cd9f10390406\") " pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.330117 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.330918 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ngtx\" (UniqueName: \"kubernetes.io/projected/aa7bf903-1f8f-4d7c-b5a1-33a07160500f-kube-api-access-7ngtx\") pod \"openshift-controller-manager-operator-756b6f6bc6-rfh2j\" (UID: \"aa7bf903-1f8f-4d7c-b5a1-33a07160500f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.346123 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.361802 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xk8gn"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.361810 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.368796 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7dd651-1a0c-43b7-8c52-525200a7146c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f4gzw\" (UID: \"3a7dd651-1a0c-43b7-8c52-525200a7146c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.373254 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2kj\" (UniqueName: \"kubernetes.io/projected/8dbf6657-96c2-472f-9e4c-0745a4c249be-kube-api-access-pv2kj\") pod \"apiserver-7bbb656c7d-49vf9\" (UID: \"8dbf6657-96c2-472f-9e4c-0745a4c249be\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.389305 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstpx\" (UniqueName: \"kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx\") pod \"console-f9d7485db-8gjpm\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") " pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.406708 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.410838 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.426602 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmpd\" (UniqueName: \"kubernetes.io/projected/5bcf9211-edc2-4706-a9ac-b5f38b856186-kube-api-access-hkmpd\") pod \"machine-config-controller-84d6567774-qx7sb\" (UID: \"5bcf9211-edc2-4706-a9ac-b5f38b856186\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.429503 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:22 crc kubenswrapper[4845]: W0202 10:34:22.451235 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8291b32a_8322_4027_af13_cd9f10390406.slice/crio-2739cd6071c9c851dfa234cb94024efbbd69ed061825d9ea4a08b2cf5cd70d07 WatchSource:0}: Error finding container 2739cd6071c9c851dfa234cb94024efbbd69ed061825d9ea4a08b2cf5cd70d07: Status 404 returned error can't find the container with id 2739cd6071c9c851dfa234cb94024efbbd69ed061825d9ea4a08b2cf5cd70d07 Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.452486 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrcc7\" (UniqueName: \"kubernetes.io/projected/ee590ca4-c2f6-4dcf-973d-df26701d689f-kube-api-access-rrcc7\") pod \"cluster-image-registry-operator-dc59b4c8b-m55h8\" (UID: \"ee590ca4-c2f6-4dcf-973d-df26701d689f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.468935 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czplh\" (UniqueName: \"kubernetes.io/projected/3ad81540-c66f-4f41-98a5-12aa607142fd-kube-api-access-czplh\") pod \"catalog-operator-68c6474976-zznfs\" (UID: \"3ad81540-c66f-4f41-98a5-12aa607142fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.487307 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phvb\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-kube-api-access-8phvb\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 
10:34:22.496981 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.498826 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6szh7" event={"ID":"611eb8a8-5fc3-4325-96be-1dba1144259b","Type":"ContainerStarted","Data":"f5fc2b79ee1a35c707a25e545423c9c4a81a43e2d7e74219382e255e6642c953"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.498854 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6szh7" event={"ID":"611eb8a8-5fc3-4325-96be-1dba1144259b","Type":"ContainerStarted","Data":"3b01c61a80a2a1ddd99e79d54045acec2e6a2a588ce464b85b6d9784ba96ab8a"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.499534 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6szh7" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.500159 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" event={"ID":"6bf70521-8fdf-400f-b7cd-d96b609b4783","Type":"ContainerStarted","Data":"1376156431a753612df059d26a63e0f28fa28d2f0b741c60818c98e052d0836b"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.500715 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rbhk2" event={"ID":"8291b32a-8322-4027-af13-cd9f10390406","Type":"ContainerStarted","Data":"2739cd6071c9c851dfa234cb94024efbbd69ed061825d9ea4a08b2cf5cd70d07"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.505326 4845 patch_prober.go:28] interesting pod/console-operator-58897d9998-6szh7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 
10.217.0.20:8443: connect: connection refused" start-of-body= Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.505371 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6szh7" podUID="611eb8a8-5fc3-4325-96be-1dba1144259b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.507510 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e903551f-3d78-4de4-a08a-ce9ea234942c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-h92cr\" (UID: \"e903551f-3d78-4de4-a08a-ce9ea234942c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.513531 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" event={"ID":"ac875c91-285a-420b-9065-50af53ab50d3","Type":"ContainerStarted","Data":"dca4acc312ecd37056dbc4edd5440def5a4b22eb4ea478d220c28a8e7aa4f810"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.528764 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" event={"ID":"66ae9a2f-1c24-4a65-b961-bd9431c667f6","Type":"ContainerStarted","Data":"439d03444803cd71a625bdc8afb36c9934af3b183281b8357bbd2b8cdba5e60d"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.530016 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" event={"ID":"d02c35e0-ade4-4316-b4da-88c6dd349220","Type":"ContainerStarted","Data":"6da09c3a2ec613197b31bce44f495dfc90eebc7f5768603b22daa847db1f963b"} Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.534759 4845 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q45wk\" (UniqueName: \"kubernetes.io/projected/d1f9a812-d62e-44ca-b83f-5f240ede92a0-kube-api-access-q45wk\") pod \"dns-operator-744455d44c-2vcpg\" (UID: \"d1f9a812-d62e-44ca-b83f-5f240ede92a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.549116 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4rbqr"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.551045 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.552894 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7blj\" (UniqueName: \"kubernetes.io/projected/1b5822e3-ccfc-4261-9a39-8f02356add90-kube-api-access-l7blj\") pod \"machine-config-operator-74547568cd-m9h2g\" (UID: \"1b5822e3-ccfc-4261-9a39-8f02356add90\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.564488 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.583917 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d758a8-6722-4c1b-be56-fe2bb6d27830-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qmvcn\" (UID: \"b5d758a8-6722-4c1b-be56-fe2bb6d27830\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.589260 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.598051 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.601531 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"] Feb 02 10:34:22 crc kubenswrapper[4845]: W0202 10:34:22.604247 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda70e2a3d_9afe_4437_b9ef_fe175eee93d6.slice/crio-7479615bc5acb0d95dc873a7b6422c041138cc9e9f9895eb879626d491a81011 WatchSource:0}: Error finding container 7479615bc5acb0d95dc873a7b6422c041138cc9e9f9895eb879626d491a81011: Status 404 returned error can't find the container with id 7479615bc5acb0d95dc873a7b6422c041138cc9e9f9895eb879626d491a81011 Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.610231 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.614987 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615048 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgm6\" (UniqueName: \"kubernetes.io/projected/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-kube-api-access-lpgm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615095 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzkk\" (UniqueName: \"kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615111 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615190 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615259 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615298 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615331 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615348 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g77qx\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615384 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615403 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615420 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615456 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615484 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615508 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.615550 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.615933 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.115922586 +0000 UTC m=+144.207324036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.616316 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" Feb 02 10:34:22 crc kubenswrapper[4845]: W0202 10:34:22.632070 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6de6b4aa_d335_4eb0_b880_7a21c9336ebf.slice/crio-11ea871b8e34a0eef6934dd0d4b800fe4ce3f391c95fdd680371c20dfc11e944 WatchSource:0}: Error finding container 11ea871b8e34a0eef6934dd0d4b800fe4ce3f391c95fdd680371c20dfc11e944: Status 404 returned error can't find the container with id 11ea871b8e34a0eef6934dd0d4b800fe4ce3f391c95fdd680371c20dfc11e944 Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.637996 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.644245 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvp5t"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.654739 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.655424 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.670942 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.681651 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p"] Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.717503 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.217470319 +0000 UTC m=+144.308871769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.717545 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718038 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fb9557-bbfb-42e4-ba6c-522685082e66-cert\") 
pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718298 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hl8d\" (UniqueName: \"kubernetes.io/projected/82fb9557-bbfb-42e4-ba6c-522685082e66-kube-api-access-2hl8d\") pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718382 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xghn\" (UniqueName: \"kubernetes.io/projected/54f66031-6300-4334-8a24-bfe02897b467-kube-api-access-7xghn\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718485 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzkk\" (UniqueName: \"kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718518 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718549 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97kh\" (UniqueName: \"kubernetes.io/projected/722bda9f-5a8b-4c83-8b1f-790da0003ce9-kube-api-access-j97kh\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718618 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718650 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-csi-data-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718685 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718738 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: 
\"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718775 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a47109aa-f36b-4a01-89d4-832ff0a7a700-serving-cert\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718870 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.718921 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-mountpoint-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719020 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719095 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719120 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77qx\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719192 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a64280-2fd1-4149-826a-1f0daed66dc1-config-volume\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719252 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719282 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-plugins-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719368 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719413 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719463 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f295e287-05b6-45e1-bfd5-3c71d7a87f15-tmpfs\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719494 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-cabundle\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719544 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-certs\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " 
pod="openshift-machine-config-operator/machine-config-server-wc6bh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719626 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq47\" (UniqueName: \"kubernetes.io/projected/be245fb2-4ef3-4642-aae0-14954ab28ffa-kube-api-access-wlq47\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719685 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719713 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47109aa-f36b-4a01-89d4-832ff0a7a700-config\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719743 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be245fb2-4ef3-4642-aae0-14954ab28ffa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.719772 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9xnxk\" (UniqueName: \"kubernetes.io/projected/f295e287-05b6-45e1-bfd5-3c71d7a87f15-kube-api-access-9xnxk\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.721429 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.722200 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.722602 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.724414 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.725927 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.725972 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x654k\" (UniqueName: \"kubernetes.io/projected/a47109aa-f36b-4a01-89d4-832ff0a7a700-kube-api-access-x654k\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726074 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7a64280-2fd1-4149-826a-1f0daed66dc1-metrics-tls\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726216 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-node-bootstrap-token\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh" Feb 02 
10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726292 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726497 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/722bda9f-5a8b-4c83-8b1f-790da0003ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726661 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.726685 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.727489 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.227475479 +0000 UTC m=+144.318876929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.727547 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqbz\" (UniqueName: \"kubernetes.io/projected/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-kube-api-access-6cqbz\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.727577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-apiservice-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.727780 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.727937 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728179 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728303 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x7f5\" (UniqueName: \"kubernetes.io/projected/b7a64280-2fd1-4149-826a-1f0daed66dc1-kube-api-access-9x7f5\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728536 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728608 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-socket-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728661 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7lz\" (UniqueName: \"kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.728751 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmb7\" (UniqueName: \"kubernetes.io/projected/6bedd3e9-4212-4b9b-866a-a473d7f1c632-kube-api-access-tfmb7\") pod \"migrator-59844c95c7-r92c5\" (UID: \"6bedd3e9-4212-4b9b-866a-a473d7f1c632\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.729305 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-webhook-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730170 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-registration-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730235 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730302 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6jd9\" (UniqueName: \"kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730360 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-key\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730389 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730595 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhrk\" (UniqueName: \"kubernetes.io/projected/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-kube-api-access-9mhrk\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730654 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjf2\" (UniqueName: \"kubernetes.io/projected/028dfe05-0d8f-4d6f-b5f4-af641b911b52-kube-api-access-chjf2\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.730676 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgm6\" (UniqueName: \"kubernetes.io/projected/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-kube-api-access-lpgm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.732652 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.734493 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx"]
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.742582 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.753929 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.764451 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.764731 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pt97w"]
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.772541 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77qx\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.787345 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp"]
Feb 02 10:34:22 crc kubenswrapper[4845]: W0202 10:34:22.792917 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29a9366_4664_4228_af51_b56b63c976b6.slice/crio-f729fd0ab07975c64a8bf46c2b75ee8ebbbb431631f9eabfbd59b576e27c02a4 WatchSource:0}: Error finding container f729fd0ab07975c64a8bf46c2b75ee8ebbbb431631f9eabfbd59b576e27c02a4: Status 404 returned error can't find the container with id f729fd0ab07975c64a8bf46c2b75ee8ebbbb431631f9eabfbd59b576e27c02a4
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.805789 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzkk\" (UniqueName: \"kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk\") pod \"controller-manager-879f6c89f-z9qjh\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.817361 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgm6\" (UniqueName: \"kubernetes.io/projected/daa4c1cf-5cd2-4dba-8ddb-543a716a4628-kube-api-access-lpgm6\") pod \"kube-storage-version-migrator-operator-b67b599dd-ns95h\" (UID: \"daa4c1cf-5cd2-4dba-8ddb-543a716a4628\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831400 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831600 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a64280-2fd1-4149-826a-1f0daed66dc1-config-volume\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831624 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-plugins-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.831655 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.331632138 +0000 UTC m=+144.423033588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831803 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831820 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-plugins-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831843 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f295e287-05b6-45e1-bfd5-3c71d7a87f15-tmpfs\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831863 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-cabundle\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831902 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-certs\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831920 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be245fb2-4ef3-4642-aae0-14954ab28ffa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831937 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlq47\" (UniqueName: \"kubernetes.io/projected/be245fb2-4ef3-4642-aae0-14954ab28ffa-kube-api-access-wlq47\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47109aa-f36b-4a01-89d4-832ff0a7a700-config\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831973 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnxk\" (UniqueName: \"kubernetes.io/projected/f295e287-05b6-45e1-bfd5-3c71d7a87f15-kube-api-access-9xnxk\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.831996 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x654k\" (UniqueName: \"kubernetes.io/projected/a47109aa-f36b-4a01-89d4-832ff0a7a700-kube-api-access-x654k\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832020 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7a64280-2fd1-4149-826a-1f0daed66dc1-metrics-tls\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832037 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-node-bootstrap-token\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832053 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/722bda9f-5a8b-4c83-8b1f-790da0003ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832076 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832093 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqbz\" (UniqueName: \"kubernetes.io/projected/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-kube-api-access-6cqbz\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832110 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832127 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-apiservice-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832162 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7lz\" (UniqueName: \"kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832307 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x7f5\" (UniqueName: \"kubernetes.io/projected/b7a64280-2fd1-4149-826a-1f0daed66dc1-kube-api-access-9x7f5\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832326 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-socket-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832351 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfmb7\" (UniqueName: \"kubernetes.io/projected/6bedd3e9-4212-4b9b-866a-a473d7f1c632-kube-api-access-tfmb7\") pod \"migrator-59844c95c7-r92c5\" (UID: \"6bedd3e9-4212-4b9b-866a-a473d7f1c632\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832387 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-webhook-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832405 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-registration-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832422 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832440 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6jd9\" (UniqueName: \"kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832455 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-key\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832472 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832491 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhrk\" (UniqueName: \"kubernetes.io/projected/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-kube-api-access-9mhrk\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832510 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjf2\" (UniqueName: \"kubernetes.io/projected/028dfe05-0d8f-4d6f-b5f4-af641b911b52-kube-api-access-chjf2\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832526 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fb9557-bbfb-42e4-ba6c-522685082e66-cert\") pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832620 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832977 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hl8d\" (UniqueName: \"kubernetes.io/projected/82fb9557-bbfb-42e4-ba6c-522685082e66-kube-api-access-2hl8d\") pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833014 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xghn\" (UniqueName: \"kubernetes.io/projected/54f66031-6300-4334-8a24-bfe02897b467-kube-api-access-7xghn\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833108 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833124 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-csi-data-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833140 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j97kh\" (UniqueName: \"kubernetes.io/projected/722bda9f-5a8b-4c83-8b1f-790da0003ce9-kube-api-access-j97kh\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833160 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a47109aa-f36b-4a01-89d4-832ff0a7a700-serving-cert\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833176 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-mountpoint-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833181 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f295e287-05b6-45e1-bfd5-3c71d7a87f15-tmpfs\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.833960 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-cabundle\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.834352 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-socket-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.834500 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.334483861 +0000 UTC m=+144.425885421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.836269 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-registration-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.835878 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-csi-data-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.832419 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a64280-2fd1-4149-826a-1f0daed66dc1-config-volume\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.839037 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/54f66031-6300-4334-8a24-bfe02897b467-mountpoint-dir\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.842581 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.843455 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7a64280-2fd1-4149-826a-1f0daed66dc1-metrics-tls\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.844310 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.844933 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.845408 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47109aa-f36b-4a01-89d4-832ff0a7a700-config\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.845696 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be245fb2-4ef3-4642-aae0-14954ab28ffa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.846804 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fb9557-bbfb-42e4-ba6c-522685082e66-cert\") pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.848312 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.850514 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/722bda9f-5a8b-4c83-8b1f-790da0003ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.851383 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-apiservice-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.852123 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.852466 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-certs\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.856654 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/028dfe05-0d8f-4d6f-b5f4-af641b911b52-signing-key\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.857155 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-node-bootstrap-token\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.858597 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j"]
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.861305 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f295e287-05b6-45e1-bfd5-3c71d7a87f15-webhook-cert\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.863061 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a47109aa-f36b-4a01-89d4-832ff0a7a700-serving-cert\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.867105 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlq47\" (UniqueName: \"kubernetes.io/projected/be245fb2-4ef3-4642-aae0-14954ab28ffa-kube-api-access-wlq47\") pod \"multus-admission-controller-857f4d67dd-rvxdk\" (UID: \"be245fb2-4ef3-4642-aae0-14954ab28ffa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.894071 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6jd9\" (UniqueName: \"kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9\") pod \"collect-profiles-29500470-ncqjg\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.916353 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfmb7\" (UniqueName: \"kubernetes.io/projected/6bedd3e9-4212-4b9b-866a-a473d7f1c632-kube-api-access-tfmb7\") pod \"migrator-59844c95c7-r92c5\" (UID: \"6bedd3e9-4212-4b9b-866a-a473d7f1c632\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.935157 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:22 crc kubenswrapper[4845]: E0202 10:34:22.935623 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.435607872 +0000 UTC m=+144.527009322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.941035 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cqbz\" (UniqueName: \"kubernetes.io/projected/ccb6c76e-4ee2-4dcc-91a9-c91e25299780-kube-api-access-6cqbz\") pod \"package-server-manager-789f6589d5-s7qmj\" (UID: \"ccb6c76e-4ee2-4dcc-91a9-c91e25299780\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.943630 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs"]
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.954725 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf"]
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.959545 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hl8d\" (UniqueName: \"kubernetes.io/projected/82fb9557-bbfb-42e4-ba6c-522685082e66-kube-api-access-2hl8d\") pod \"ingress-canary-mgqnl\" (UID: \"82fb9557-bbfb-42e4-ba6c-522685082e66\") " pod="openshift-ingress-canary/ingress-canary-mgqnl"
Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.990664 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7lz\" (UniqueName: \"kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz\") pod
\"marketplace-operator-79b997595-hr44j\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") " pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" Feb 02 10:34:22 crc kubenswrapper[4845]: I0202 10:34:22.996349 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x7f5\" (UniqueName: \"kubernetes.io/projected/b7a64280-2fd1-4149-826a-1f0daed66dc1-kube-api-access-9x7f5\") pod \"dns-default-f2fvl\" (UID: \"b7a64280-2fd1-4149-826a-1f0daed66dc1\") " pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.012269 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhrk\" (UniqueName: \"kubernetes.io/projected/0b2a5cbc-1208-4d37-be25-4d333adfb8f6-kube-api-access-9mhrk\") pod \"machine-config-server-wc6bh\" (UID: \"0b2a5cbc-1208-4d37-be25-4d333adfb8f6\") " pod="openshift-machine-config-operator/machine-config-server-wc6bh" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.020337 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.030895 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xghn\" (UniqueName: \"kubernetes.io/projected/54f66031-6300-4334-8a24-bfe02897b467-kube-api-access-7xghn\") pod \"csi-hostpathplugin-wsz25\" (UID: \"54f66031-6300-4334-8a24-bfe02897b467\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.036390 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.036711 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.536697312 +0000 UTC m=+144.628098762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.037837 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.046584 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.052792 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjf2\" (UniqueName: \"kubernetes.io/projected/028dfe05-0d8f-4d6f-b5f4-af641b911b52-kube-api-access-chjf2\") pod \"service-ca-9c57cc56f-vq42n\" (UID: \"028dfe05-0d8f-4d6f-b5f4-af641b911b52\") " pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.060197 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.069423 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x654k\" (UniqueName: \"kubernetes.io/projected/a47109aa-f36b-4a01-89d4-832ff0a7a700-kube-api-access-x654k\") pod \"service-ca-operator-777779d784-fz66j\" (UID: \"a47109aa-f36b-4a01-89d4-832ff0a7a700\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.089830 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j97kh\" (UniqueName: \"kubernetes.io/projected/722bda9f-5a8b-4c83-8b1f-790da0003ce9-kube-api-access-j97kh\") pod \"control-plane-machine-set-operator-78cbb6b69f-c46tw\" (UID: \"722bda9f-5a8b-4c83-8b1f-790da0003ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.095038 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw"] Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.101450 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ab0690_627e_4d43_b80c_3b3f96b06249.slice/crio-2b8a69512cfb9d9dd4b38eb8d78e0cce29c4dc9efcda9772c59b456a376af80d WatchSource:0}: Error finding container 2b8a69512cfb9d9dd4b38eb8d78e0cce29c4dc9efcda9772c59b456a376af80d: Status 404 returned error can't find the container with id 2b8a69512cfb9d9dd4b38eb8d78e0cce29c4dc9efcda9772c59b456a376af80d Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.103021 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ad81540_c66f_4f41_98a5_12aa607142fd.slice/crio-c8c1223c8850e3e25947a1ef508bacce0d25138ed4ae9fcde550f956bd922c8b WatchSource:0}: Error finding container c8c1223c8850e3e25947a1ef508bacce0d25138ed4ae9fcde550f956bd922c8b: Status 404 returned error can't find the container with id c8c1223c8850e3e25947a1ef508bacce0d25138ed4ae9fcde550f956bd922c8b Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.112162 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.118818 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a7dd651_1a0c_43b7_8c52_525200a7146c.slice/crio-6451af35a2b974a31dbc52d12a449d3f3d7c0b3e202be8504cf3c103e1bf1242 WatchSource:0}: Error finding container 6451af35a2b974a31dbc52d12a449d3f3d7c0b3e202be8504cf3c103e1bf1242: Status 404 returned error can't find the container with id 6451af35a2b974a31dbc52d12a449d3f3d7c0b3e202be8504cf3c103e1bf1242 Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.125318 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.135537 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnxk\" (UniqueName: \"kubernetes.io/projected/f295e287-05b6-45e1-bfd5-3c71d7a87f15-kube-api-access-9xnxk\") pod \"packageserver-d55dfcdfc-c5c85\" (UID: \"f295e287-05b6-45e1-bfd5-3c71d7a87f15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.137195 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.137327 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.137651 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.637631228 +0000 UTC m=+144.729032738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.150112 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.161021 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.167374 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wc6bh" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.180153 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mgqnl" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.180349 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.198438 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.204595 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.230555 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.238574 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.238857 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.738846762 +0000 UTC m=+144.830248212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.245852 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.264445 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2vcpg"] Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.287330 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d41e42_423a_4bac_bc05_3c424c978fd8.slice/crio-983de89da05d00215d904c21a6798495b0919b3a61d3d8890b2f47a3f16bcb7f WatchSource:0}: Error finding container 983de89da05d00215d904c21a6798495b0919b3a61d3d8890b2f47a3f16bcb7f: Status 404 returned error can't find the container with id 983de89da05d00215d904c21a6798495b0919b3a61d3d8890b2f47a3f16bcb7f Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.319939 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f9a812_d62e_44ca_b83f_5f240ede92a0.slice/crio-8bcad2e78a62e68573e117efc6dc671b25a9e715a587b9e0fe3d8baefac4f023 WatchSource:0}: Error finding container 8bcad2e78a62e68573e117efc6dc671b25a9e715a587b9e0fe3d8baefac4f023: Status 404 returned error can't find the container with id 8bcad2e78a62e68573e117efc6dc671b25a9e715a587b9e0fe3d8baefac4f023 Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.339364 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.340043 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.840025704 +0000 UTC m=+144.931427154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.363760 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.383098 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.400054 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.442496 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.442546 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.443494 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.444115 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:23.944098211 +0000 UTC m=+145.035499671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.539159 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" event={"ID":"1eb84a90-bb90-4b5f-9e30-7415cb27cd39","Type":"ContainerStarted","Data":"557f339ef62a68f82cafbdec4d59ac3717807470c7e72a477ae014a95e0d4fa8"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.539211 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" event={"ID":"1eb84a90-bb90-4b5f-9e30-7415cb27cd39","Type":"ContainerStarted","Data":"95ffa56c127374c1d2cf12b892f47ee7784e1f0ddfa880bf6d39b092c0c0ab47"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.545711 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.545870 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.0458467 +0000 UTC m=+145.137248150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.548158 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.548573 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.048559729 +0000 UTC m=+145.139961179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.548687 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" event={"ID":"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9","Type":"ContainerStarted","Data":"5b6d1b889f30e64bb630226b12a8dec11c71a3e29ecb559bc71709271b41b56a"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.552245 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" event={"ID":"8dbf6657-96c2-472f-9e4c-0745a4c249be","Type":"ContainerStarted","Data":"bc7cad6e4a8caca58a375e3d6dbb0c656d2b38d5e89f85b77c6e6bd886778b12"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.558612 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" event={"ID":"d02c35e0-ade4-4316-b4da-88c6dd349220","Type":"ContainerStarted","Data":"2887a44009767064989b7afdb40a163bd61e4a12e17734f65b637e9f5ac66eb6"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.564782 4845 generic.go:334] "Generic (PLEG): container finished" podID="6de6b4aa-d335-4eb0-b880-7a21c9336ebf" containerID="91bd591227b78f9c1f0d5b5f4151094dc6149a69b81bc559948c2239bb684c96" exitCode=0 Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.564834 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" 
event={"ID":"6de6b4aa-d335-4eb0-b880-7a21c9336ebf","Type":"ContainerDied","Data":"91bd591227b78f9c1f0d5b5f4151094dc6149a69b81bc559948c2239bb684c96"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.564856 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" event={"ID":"6de6b4aa-d335-4eb0-b880-7a21c9336ebf","Type":"ContainerStarted","Data":"11ea871b8e34a0eef6934dd0d4b800fe4ce3f391c95fdd680371c20dfc11e944"} Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.566258 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d758a8_6722_4c1b_be56_fe2bb6d27830.slice/crio-87b2943d2cc3d4173e627e67c4f05ab2812f4c96bb4e7fd389c8fdd788324d2b WatchSource:0}: Error finding container 87b2943d2cc3d4173e627e67c4f05ab2812f4c96bb4e7fd389c8fdd788324d2b: Status 404 returned error can't find the container with id 87b2943d2cc3d4173e627e67c4f05ab2812f4c96bb4e7fd389c8fdd788324d2b Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.567737 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" event={"ID":"f8a6e66d-c97e-43eb-8ff0-864e543f5488","Type":"ContainerStarted","Data":"47f118d00a342510dc1eb1714c5308c9aac6e9fef7bd7daefc6a0ad7f74e5995"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.578912 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.586934 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.596415 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj"] Feb 02 10:34:23 crc 
kubenswrapper[4845]: I0202 10:34:23.601325 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" event={"ID":"0bc78651-e3a0-4988-acfa-89a6391f4aa5","Type":"ContainerStarted","Data":"8321bcfb26cb6f4f0dd226d6bce488ebee57d77db38a8e8691052e7c8969b3dc"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.602985 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.605693 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" event={"ID":"aa7bf903-1f8f-4d7c-b5a1-33a07160500f","Type":"ContainerStarted","Data":"e847ef4d6fd85f53465f7476e19b5522eebcea516d5ef0150abc2947c4be1ccc"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.627612 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" event={"ID":"1b5822e3-ccfc-4261-9a39-8f02356add90","Type":"ContainerStarted","Data":"e7e9916fbdb937e78d4a7ec3652add0eba916379930c8fe6e2b49871f597ea2f"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.638585 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" event={"ID":"d1f9a812-d62e-44ca-b83f-5f240ede92a0","Type":"ContainerStarted","Data":"8bcad2e78a62e68573e117efc6dc671b25a9e715a587b9e0fe3d8baefac4f023"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.639706 4845 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t62rp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.639747 
4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" podUID="0bc78651-e3a0-4988-acfa-89a6391f4aa5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.644721 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" event={"ID":"ac875c91-285a-420b-9065-50af53ab50d3","Type":"ContainerStarted","Data":"1fcb883e8f41725d44677baa16381c121b0232bd5110fcb8fbbbde75a1dff506"} Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.645083 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.648661 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.648879 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.148859856 +0000 UTC m=+145.240261306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.648979 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.651305 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.151289816 +0000 UTC m=+145.242691266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.655068 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" event={"ID":"3a7dd651-1a0c-43b7-8c52-525200a7146c","Type":"ContainerStarted","Data":"6451af35a2b974a31dbc52d12a449d3f3d7c0b3e202be8504cf3c103e1bf1242"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.657362 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4rbqr" event={"ID":"a70e2a3d-9afe-4437-b9ef-fe175eee93d6","Type":"ContainerStarted","Data":"b9ba1eba59ccca8d0b46f34f7d248688065dda041c0ab8d79afac8d4f083e5d7"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.657382 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4rbqr" event={"ID":"a70e2a3d-9afe-4437-b9ef-fe175eee93d6","Type":"ContainerStarted","Data":"7479615bc5acb0d95dc873a7b6422c041138cc9e9f9895eb879626d491a81011"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.658058 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4rbqr"
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.668390 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6szh7" podStartSLOduration=123.668370681 podStartE2EDuration="2m3.668370681s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:23.663978914 +0000 UTC m=+144.755380364" watchObservedRunningTime="2026-02-02 10:34:23.668370681 +0000 UTC m=+144.759772131"
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.663863 4845 patch_prober.go:28] interesting pod/downloads-7954f5f757-4rbqr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.669062 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4rbqr" podUID="a70e2a3d-9afe-4437-b9ef-fe175eee93d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.683850 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" event={"ID":"3ad81540-c66f-4f41-98a5-12aa607142fd","Type":"ContainerStarted","Data":"c8c1223c8850e3e25947a1ef508bacce0d25138ed4ae9fcde550f956bd922c8b"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.689546 4845 generic.go:334] "Generic (PLEG): container finished" podID="66ae9a2f-1c24-4a65-b961-bd9431c667f6" containerID="a24494154dda6f22ac9de519e84094a42d7649e179a60d5f4c7c098a52187a04" exitCode=0
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.689594 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" event={"ID":"66ae9a2f-1c24-4a65-b961-bd9431c667f6","Type":"ContainerDied","Data":"a24494154dda6f22ac9de519e84094a42d7649e179a60d5f4c7c098a52187a04"}
Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.692988 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fd9461a_3591_4e69_a9fd_2fd7de4d84cd.slice/crio-3b0e906a8843ecd4cf6e3849e6d3f269dfbf6989409b6ad8fb10a3cbc5ff1f7a WatchSource:0}: Error finding container 3b0e906a8843ecd4cf6e3849e6d3f269dfbf6989409b6ad8fb10a3cbc5ff1f7a: Status 404 returned error can't find the container with id 3b0e906a8843ecd4cf6e3849e6d3f269dfbf6989409b6ad8fb10a3cbc5ff1f7a
Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.694067 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb6c76e_4ee2_4dcc_91a9_c91e25299780.slice/crio-814d327cd7c7f0f1b425f19b9e9e4776ff7d9e8491a231c94d8778274b06a4ff WatchSource:0}: Error finding container 814d327cd7c7f0f1b425f19b9e9e4776ff7d9e8491a231c94d8778274b06a4ff: Status 404 returned error can't find the container with id 814d327cd7c7f0f1b425f19b9e9e4776ff7d9e8491a231c94d8778274b06a4ff
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.695624 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8gjpm" event={"ID":"04d41e42-423a-4bac-bc05-3c424c978fd8","Type":"ContainerStarted","Data":"983de89da05d00215d904c21a6798495b0919b3a61d3d8890b2f47a3f16bcb7f"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.706266 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" event={"ID":"81ab0690-627e-4d43-b80c-3b3f96b06249","Type":"ContainerStarted","Data":"2b8a69512cfb9d9dd4b38eb8d78e0cce29c4dc9efcda9772c59b456a376af80d"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.711774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" event={"ID":"c29a9366-4664-4228-af51-b56b63c976b6","Type":"ContainerStarted","Data":"cc18f294854dbb97b7db62b4d8ad87760ce4257c1fd5ea150cda370bd9cdde14"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.711807 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" event={"ID":"c29a9366-4664-4228-af51-b56b63c976b6","Type":"ContainerStarted","Data":"f729fd0ab07975c64a8bf46c2b75ee8ebbbb431631f9eabfbd59b576e27c02a4"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.739069 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" event={"ID":"6bf70521-8fdf-400f-b7cd-d96b609b4783","Type":"ContainerStarted","Data":"8715d78959c283955471ec0450d5a36f2662bb391ddd5cb8291fa26867112699"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.739099 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" event={"ID":"6bf70521-8fdf-400f-b7cd-d96b609b4783","Type":"ContainerStarted","Data":"7688cc46374e131bdea5caaf1914a891f91babd65e68afda0e1d0ed55aa31e13"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.739111 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vq42n"]
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.744952 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rbhk2" event={"ID":"8291b32a-8322-4027-af13-cd9f10390406","Type":"ContainerStarted","Data":"1075bda810646faa2c991a9e04f27770b88a84303d9b9e36fd2dc8f540cb4d92"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.751026 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.752843 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.252825169 +0000 UTC m=+145.344226619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.770533 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" event={"ID":"98d47741-7063-487f-a38b-b9c398f3e07e","Type":"ContainerStarted","Data":"7e70d90a9fcf67e25e1301d6daeb2cdf5956e3a601c854abda0627c2816b60da"}
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.810245 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"]
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.854750 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.855929 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.355907477 +0000 UTC m=+145.447308927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.857649 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rvxdk"]
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.887508 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6szh7"
Feb 02 10:34:23 crc kubenswrapper[4845]: W0202 10:34:23.889175 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe245fb2_4ef3_4642_aae0_14954ab28ffa.slice/crio-314e29ad8f84e0e7dd86789011160c80b856dada5fd6f295211f6b975ef5647d WatchSource:0}: Error finding container 314e29ad8f84e0e7dd86789011160c80b856dada5fd6f295211f6b975ef5647d: Status 404 returned error can't find the container with id 314e29ad8f84e0e7dd86789011160c80b856dada5fd6f295211f6b975ef5647d
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.959937 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:23 crc kubenswrapper[4845]: E0202 10:34:23.960605 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.460588852 +0000 UTC m=+145.551990302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:23 crc kubenswrapper[4845]: I0202 10:34:23.977735 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.067745 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.068099 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.568088307 +0000 UTC m=+145.659489757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.106822 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f2fvl"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.107283 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.143950 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fz66j"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.146358 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz25"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.168844 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.169214 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.669197908 +0000 UTC m=+145.760599358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.207655 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw"]
Feb 02 10:34:24 crc kubenswrapper[4845]: W0202 10:34:24.235427 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7a64280_2fd1_4149_826a_1f0daed66dc1.slice/crio-a30b8f2c0a8db239b6d988e96744ca4d9e974c39638b428b14f8c460b33bdb8e WatchSource:0}: Error finding container a30b8f2c0a8db239b6d988e96744ca4d9e974c39638b428b14f8c460b33bdb8e: Status 404 returned error can't find the container with id a30b8f2c0a8db239b6d988e96744ca4d9e974c39638b428b14f8c460b33bdb8e
Feb 02 10:34:24 crc kubenswrapper[4845]: W0202 10:34:24.253818 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f66031_6300_4334_8a24_bfe02897b467.slice/crio-8a704df30240b126ce68777f74cfd7d8b364c39667844dfaba4bffb00e186fae WatchSource:0}: Error finding container 8a704df30240b126ce68777f74cfd7d8b364c39667844dfaba4bffb00e186fae: Status 404 returned error can't find the container with id 8a704df30240b126ce68777f74cfd7d8b364c39667844dfaba4bffb00e186fae
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.270355 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.270687 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.77067643 +0000 UTC m=+145.862077880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.347685 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rbhk2"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.351975 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:34:24 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld
Feb 02 10:34:24 crc kubenswrapper[4845]: [+]process-running ok
Feb 02 10:34:24 crc kubenswrapper[4845]: healthz check failed
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.352020 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.374431 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.374551 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.874510909 +0000 UTC m=+145.965912369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.374692 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.375056 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.875045835 +0000 UTC m=+145.966447285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.403579 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.485203 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.485817 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:24.985799215 +0000 UTC m=+146.077200665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.524125 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mgqnl"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.533767 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85"]
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.586593 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.586992 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.086977598 +0000 UTC m=+146.178379058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.597369 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" podStartSLOduration=123.597346938 podStartE2EDuration="2m3.597346938s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.596056231 +0000 UTC m=+145.687457711" watchObservedRunningTime="2026-02-02 10:34:24.597346938 +0000 UTC m=+145.688748398"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.645899 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4rbqr" podStartSLOduration=124.645860464 podStartE2EDuration="2m4.645860464s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.64363649 +0000 UTC m=+145.735037940" watchObservedRunningTime="2026-02-02 10:34:24.645860464 +0000 UTC m=+145.737261914"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.668115 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xk8gn" podStartSLOduration=123.668099579 podStartE2EDuration="2m3.668099579s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.666526123 +0000 UTC m=+145.757927603" watchObservedRunningTime="2026-02-02 10:34:24.668099579 +0000 UTC m=+145.759501029"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.695253 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.695637 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.195623407 +0000 UTC m=+146.287024857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.751537 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rbhk2" podStartSLOduration=123.751519387 podStartE2EDuration="2m3.751519387s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.748987743 +0000 UTC m=+145.840389193" watchObservedRunningTime="2026-02-02 10:34:24.751519387 +0000 UTC m=+145.842920837"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.795383 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8gjpm" event={"ID":"04d41e42-423a-4bac-bc05-3c424c978fd8","Type":"ContainerStarted","Data":"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.796479 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.796924 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.296911813 +0000 UTC m=+146.388313263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.808839 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" event={"ID":"daa4c1cf-5cd2-4dba-8ddb-543a716a4628","Type":"ContainerStarted","Data":"3de7c534b24a3dec680dcab862d65cd483c1db000fa370db2efab2e3e6767386"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.808907 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" event={"ID":"daa4c1cf-5cd2-4dba-8ddb-543a716a4628","Type":"ContainerStarted","Data":"829cda8afcfa6e37a62d80832e5473a03e1782a9812f40517469d21c1208c661"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.826074 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" event={"ID":"f8a6e66d-c97e-43eb-8ff0-864e543f5488","Type":"ContainerStarted","Data":"fd8b74886f7593d6c5904904a6a2172128ea7c3d810ed91683593d31c4cdd201"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.849134 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" podStartSLOduration=123.849119986 podStartE2EDuration="2m3.849119986s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.787264183 +0000 UTC m=+145.878665643" watchObservedRunningTime="2026-02-02 10:34:24.849119986 +0000 UTC m=+145.940521436"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.851485 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" event={"ID":"54f66031-6300-4334-8a24-bfe02897b467","Type":"ContainerStarted","Data":"8a704df30240b126ce68777f74cfd7d8b364c39667844dfaba4bffb00e186fae"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.897858 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" event={"ID":"be245fb2-4ef3-4642-aae0-14954ab28ffa","Type":"ContainerStarted","Data":"314e29ad8f84e0e7dd86789011160c80b856dada5fd6f295211f6b975ef5647d"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.899135 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:24 crc kubenswrapper[4845]: E0202 10:34:24.899468 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.399436514 +0000 UTC m=+146.490837964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.921161 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" event={"ID":"2b15a9aa-d443-4058-8e02-0f0eafbd7dd9","Type":"ContainerStarted","Data":"0930e530cc937d4a8c4fcbcb2b9c11527383b8973b22bb7ef168b7c76196d1f6"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.923774 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rch6p" podStartSLOduration=124.923754469 podStartE2EDuration="2m4.923754469s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.920455814 +0000 UTC m=+146.011857264" watchObservedRunningTime="2026-02-02 10:34:24.923754469 +0000 UTC m=+146.015155919"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.930763 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" event={"ID":"6de6b4aa-d335-4eb0-b880-7a21c9336ebf","Type":"ContainerStarted","Data":"ac061238d93eaceea2c3c153ac595f65ac8da4d25d1d4e6fba424a5c25dee01b"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.931455 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.935586 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp" event={"ID":"0bc78651-e3a0-4988-acfa-89a6391f4aa5","Type":"ContainerStarted","Data":"0833a582c1546aeb6322237fde9c5e15066b8c7aa72d979f3b37512bdf6cf54b"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.947510 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5" event={"ID":"6bedd3e9-4212-4b9b-866a-a473d7f1c632","Type":"ContainerStarted","Data":"818e18a70080750b6efc8d22b53b542d100d5c445a3ff5d753ecdc32a2f77394"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.959950 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t62rp"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.961521 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8gjpm" podStartSLOduration=124.961510464 podStartE2EDuration="2m4.961510464s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.955025796 +0000 UTC m=+146.046427246" watchObservedRunningTime="2026-02-02 10:34:24.961510464 +0000 UTC m=+146.052911914"
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.968165 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" event={"ID":"81ab0690-627e-4d43-b80c-3b3f96b06249","Type":"ContainerStarted","Data":"15544f89f02f2d0c7bc9f81b6b0bf6862f621c35cad77beee5e5aca14d31c82c"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.987762 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" event={"ID":"aa7bf903-1f8f-4d7c-b5a1-33a07160500f","Type":"ContainerStarted","Data":"000968c3aaeba68df1e5aa3e39dce40cc677ba4ca087fc6e592bb5257d50c063"}
Feb 02 10:34:24 crc kubenswrapper[4845]: I0202 10:34:24.990641 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mgqnl" event={"ID":"82fb9557-bbfb-42e4-ba6c-522685082e66","Type":"ContainerStarted","Data":"04ea9b2c57c8554b0c87b059c86647c80058d6f975f895eeaf19b901ed4d8a07"}
Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.004418 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" event={"ID":"e903551f-3d78-4de4-a08a-ce9ea234942c","Type":"ContainerStarted","Data":"969afdc837fa269d9afb2c4d0b10e8fedcbb33e325a20f2dca14132d0a8bcb66"}
Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.004470 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" event={"ID":"e903551f-3d78-4de4-a08a-ce9ea234942c","Type":"ContainerStarted","Data":"36711d5d688314e1c7a50c51aa64c94a7e064c0ee700e52fa409c4f4c1c90839"}
Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.005184 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.005572 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-02-02 10:34:25.50555732 +0000 UTC m=+146.596958770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.035802 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pt97w" podStartSLOduration=125.035781506 podStartE2EDuration="2m5.035781506s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:24.985489529 +0000 UTC m=+146.076890979" watchObservedRunningTime="2026-02-02 10:34:25.035781506 +0000 UTC m=+146.127182956" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.066478 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" event={"ID":"5bcf9211-edc2-4706-a9ac-b5f38b856186","Type":"ContainerStarted","Data":"c4dd4a396199592b0a6e6a2eac4ea47d74e443293291f751e0d5d35a972b367d"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.066543 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" event={"ID":"5bcf9211-edc2-4706-a9ac-b5f38b856186","Type":"ContainerStarted","Data":"894d57605e7e68cc219bd424e8cd3a2a5de95835d8d798f779b61c0f5123fa5e"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.069134 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" podStartSLOduration=125.069093922 podStartE2EDuration="2m5.069093922s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.037697752 +0000 UTC m=+146.129099202" watchObservedRunningTime="2026-02-02 10:34:25.069093922 +0000 UTC m=+146.160495372" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.076595 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rfh2j" podStartSLOduration=125.076580149 podStartE2EDuration="2m5.076580149s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.064474228 +0000 UTC m=+146.155875688" watchObservedRunningTime="2026-02-02 10:34:25.076580149 +0000 UTC m=+146.167981599" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.106786 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.108355 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-25kbf" podStartSLOduration=124.108334879 podStartE2EDuration="2m4.108334879s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 
10:34:25.105025463 +0000 UTC m=+146.196426913" watchObservedRunningTime="2026-02-02 10:34:25.108334879 +0000 UTC m=+146.199736329" Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.109064 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.60904212 +0000 UTC m=+146.700443580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.137380 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wc6bh" event={"ID":"0b2a5cbc-1208-4d37-be25-4d333adfb8f6","Type":"ContainerStarted","Data":"af2aeeb6d16df5b83775ee46aa1e1142963c553ad5b8907efe73aa04c58538c7"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.137439 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wc6bh" event={"ID":"0b2a5cbc-1208-4d37-be25-4d333adfb8f6","Type":"ContainerStarted","Data":"6d307057403aa8f48e333b18180cfb605fcdbfa9e2931d1077fae8d5e6a4012a"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.146508 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" event={"ID":"f295e287-05b6-45e1-bfd5-3c71d7a87f15","Type":"ContainerStarted","Data":"102c0ff0c549b327001c7a5305c3fbae61954daed1b57772e0162dd791a156d2"} Feb 
02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.158605 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" event={"ID":"3a7dd651-1a0c-43b7-8c52-525200a7146c","Type":"ContainerStarted","Data":"44acb71bf2d8d0256e9c17b725034395d88f81900558083e9d9ea694b3f8010d"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.212374 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.213591 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" event={"ID":"ccb6c76e-4ee2-4dcc-91a9-c91e25299780","Type":"ContainerStarted","Data":"0c742ea686f9e530518fd01a249f2ffc756ee0ec7968b6d68e641ebe68bbdb82"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.213626 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" event={"ID":"ccb6c76e-4ee2-4dcc-91a9-c91e25299780","Type":"ContainerStarted","Data":"814d327cd7c7f0f1b425f19b9e9e4776ff7d9e8491a231c94d8778274b06a4ff"} Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.213676 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.713660222 +0000 UTC m=+146.805061752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.215010 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wc6bh" podStartSLOduration=6.214993171 podStartE2EDuration="6.214993171s" podCreationTimestamp="2026-02-02 10:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.20668874 +0000 UTC m=+146.298090190" watchObservedRunningTime="2026-02-02 10:34:25.214993171 +0000 UTC m=+146.306394621" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.229058 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f4gzw" podStartSLOduration=124.229035538 podStartE2EDuration="2m4.229035538s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.227784141 +0000 UTC m=+146.319185591" watchObservedRunningTime="2026-02-02 10:34:25.229035538 +0000 UTC m=+146.320436988" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.238223 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" event={"ID":"028dfe05-0d8f-4d6f-b5f4-af641b911b52","Type":"ContainerStarted","Data":"28809ff5e84978fe7b40548668775a4193032fc3227abda1f378e09172539205"} Feb 02 
10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.251084 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" event={"ID":"ee590ca4-c2f6-4dcf-973d-df26701d689f","Type":"ContainerStarted","Data":"ad4ea7f0dae6a93207b9d95af04c1479e13b2c9c74a9493efcae48a316941e9f"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.251122 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" event={"ID":"ee590ca4-c2f6-4dcf-973d-df26701d689f","Type":"ContainerStarted","Data":"334f628fe2fe677e548098919d50f4b0a6a2219adbb3c5e601c061ff9223d142"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.253496 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" event={"ID":"d2ddc114-bfc4-444f-aeb3-8d43d95bec09","Type":"ContainerStarted","Data":"a4725b8556aef79f1893fb492aa385b87083299f9e257b7881345d1bcb5c2f73"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.254341 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.259726 4845 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hr44j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.259782 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 02 10:34:25 crc 
kubenswrapper[4845]: I0202 10:34:25.263799 4845 generic.go:334] "Generic (PLEG): container finished" podID="8dbf6657-96c2-472f-9e4c-0745a4c249be" containerID="6fe42116e9a251c4ad584597f07499c48bc249afad969ff74a8a21e23add0298" exitCode=0 Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.263870 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" event={"ID":"8dbf6657-96c2-472f-9e4c-0745a4c249be","Type":"ContainerDied","Data":"6fe42116e9a251c4ad584597f07499c48bc249afad969ff74a8a21e23add0298"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.286845 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" podStartSLOduration=124.286826623 podStartE2EDuration="2m4.286826623s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.285412202 +0000 UTC m=+146.376813652" watchObservedRunningTime="2026-02-02 10:34:25.286826623 +0000 UTC m=+146.378228073" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.304321 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" event={"ID":"3ad81540-c66f-4f41-98a5-12aa607142fd","Type":"ContainerStarted","Data":"a810ce009a55cd2c040663024b3101546b89da6e3681e8d1428597eedb73e851"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.306526 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.315301 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.316500 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.816485233 +0000 UTC m=+146.907886683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.318179 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" event={"ID":"98d47741-7063-487f-a38b-b9c398f3e07e","Type":"ContainerStarted","Data":"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.320029 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.322306 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" podStartSLOduration=124.322282431 podStartE2EDuration="2m4.322282431s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.318781339 +0000 UTC m=+146.410182789" 
watchObservedRunningTime="2026-02-02 10:34:25.322282431 +0000 UTC m=+146.413683881" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.332748 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.337376 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" event={"ID":"d1f9a812-d62e-44ca-b83f-5f240ede92a0","Type":"ContainerStarted","Data":"ac7cd0233c2a1454cbee42a05b5ad14623f0234f364d71ae7f42e532bf7fb259"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.362780 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-m55h8" podStartSLOduration=125.362764254 podStartE2EDuration="2m5.362764254s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.362609809 +0000 UTC m=+146.454011259" watchObservedRunningTime="2026-02-02 10:34:25.362764254 +0000 UTC m=+146.454165704" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.365954 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:25 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:25 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:25 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.366005 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.382027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" event={"ID":"d02c35e0-ade4-4316-b4da-88c6dd349220","Type":"ContainerStarted","Data":"288f218b1dcf5bbec815dfab3c8f5edf4f1b5807cf1cd6745b0be226db11a9e8"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.403004 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" event={"ID":"722bda9f-5a8b-4c83-8b1f-790da0003ce9","Type":"ContainerStarted","Data":"42f5ad96602eb5e50b5958e8e21f18aba8a956169763527c2ae2ef527cf9b0da"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.419263 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.420759 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:25.920744325 +0000 UTC m=+147.012145775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.462094 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f2fvl" event={"ID":"b7a64280-2fd1-4149-826a-1f0daed66dc1","Type":"ContainerStarted","Data":"a30b8f2c0a8db239b6d988e96744ca4d9e974c39638b428b14f8c460b33bdb8e"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.496016 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" event={"ID":"1b5822e3-ccfc-4261-9a39-8f02356add90","Type":"ContainerStarted","Data":"adf72df2145a59d69648b8ba66489c83ea3180983f62d0d8950b8d38cb88f311"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.503554 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" podStartSLOduration=125.503539204 podStartE2EDuration="2m5.503539204s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.470789565 +0000 UTC m=+146.562191005" watchObservedRunningTime="2026-02-02 10:34:25.503539204 +0000 UTC m=+146.594940654" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.515140 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" 
event={"ID":"a47109aa-f36b-4a01-89d4-832ff0a7a700","Type":"ContainerStarted","Data":"a73eac9f6daf9c64976bdda1cc87e3394ca15671f86625d4403db62a979d75d3"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.525545 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.525848 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.025836341 +0000 UTC m=+147.117237791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.541247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" event={"ID":"b5d758a8-6722-4c1b-be56-fe2bb6d27830","Type":"ContainerStarted","Data":"87b2943d2cc3d4173e627e67c4f05ab2812f4c96bb4e7fd389c8fdd788324d2b"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.543108 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zznfs" podStartSLOduration=124.543091601 
podStartE2EDuration="2m4.543091601s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.510024782 +0000 UTC m=+146.601426232" watchObservedRunningTime="2026-02-02 10:34:25.543091601 +0000 UTC m=+146.634493051" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.566972 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" event={"ID":"30bde55e-4121-4b71-b6f4-6cb3a9acd82e","Type":"ContainerStarted","Data":"784d840cf0db3fd4333a0324bf44adc6d663afbb97888004d15f7427f54cbe89"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.576220 4845 csr.go:261] certificate signing request csr-jvq7s is approved, waiting to be issued Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.588625 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" event={"ID":"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd","Type":"ContainerStarted","Data":"3b0e906a8843ecd4cf6e3849e6d3f269dfbf6989409b6ad8fb10a3cbc5ff1f7a"} Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.588756 4845 patch_prober.go:28] interesting pod/downloads-7954f5f757-4rbqr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.588801 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4rbqr" podUID="a70e2a3d-9afe-4437-b9ef-fe175eee93d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.589309 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.591021 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bl6zg" podStartSLOduration=126.59100907 podStartE2EDuration="2m6.59100907s" podCreationTimestamp="2026-02-02 10:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.544158342 +0000 UTC m=+146.635559812" watchObservedRunningTime="2026-02-02 10:34:25.59100907 +0000 UTC m=+146.682410520" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.607079 4845 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z9qjh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.607128 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.607815 4845 csr.go:257] certificate signing request csr-jvq7s is issued Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.629047 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.633738 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.133714828 +0000 UTC m=+147.225116278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.640123 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" podStartSLOduration=124.640107623 podStartE2EDuration="2m4.640107623s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.589093014 +0000 UTC m=+146.680494464" watchObservedRunningTime="2026-02-02 10:34:25.640107623 +0000 UTC m=+146.731509073" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.643338 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" podStartSLOduration=124.643308746 podStartE2EDuration="2m4.643308746s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.63864654 +0000 UTC 
m=+146.730047990" watchObservedRunningTime="2026-02-02 10:34:25.643308746 +0000 UTC m=+146.734710196" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.681059 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" podStartSLOduration=124.681042469 podStartE2EDuration="2m4.681042469s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.675834378 +0000 UTC m=+146.767235828" watchObservedRunningTime="2026-02-02 10:34:25.681042469 +0000 UTC m=+146.772443919" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.702862 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvp5t" podStartSLOduration=125.702847371 podStartE2EDuration="2m5.702847371s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.702205003 +0000 UTC m=+146.793606453" watchObservedRunningTime="2026-02-02 10:34:25.702847371 +0000 UTC m=+146.794248821" Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.767446 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.767876 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:26.267854876 +0000 UTC m=+147.359256326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.868873 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.869283 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.369268494 +0000 UTC m=+147.460669944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:25 crc kubenswrapper[4845]: I0202 10:34:25.971475 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:25 crc kubenswrapper[4845]: E0202 10:34:25.972092 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.472078174 +0000 UTC m=+147.563479614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.074004 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.074438 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.574418231 +0000 UTC m=+147.665819781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.175686 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.175898 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.675851931 +0000 UTC m=+147.767253391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.176040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.176451 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.676434697 +0000 UTC m=+147.767836137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.276663 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.276871 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.776846048 +0000 UTC m=+147.868247498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.277115 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.277462 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.777453436 +0000 UTC m=+147.868854886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.320259 4845 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-w989s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.320321 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.352957 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:26 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:26 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:26 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.353059 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" 
podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.378457 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.378671 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.878643639 +0000 UTC m=+147.970045099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.378803 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.379206 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.879194294 +0000 UTC m=+147.970595744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.480371 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.480579 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.980553532 +0000 UTC m=+148.071954982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.481037 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.481448 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:26.981437788 +0000 UTC m=+148.072839318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.582057 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.582295 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.08225261 +0000 UTC m=+148.173654060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.582362 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.582674 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.082661572 +0000 UTC m=+148.174063022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.594213 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vq42n" event={"ID":"028dfe05-0d8f-4d6f-b5f4-af641b911b52","Type":"ContainerStarted","Data":"5b64f630425ff95eb7934e0cf5c431541b5bd437362a1587c9171fd6b89d35c4"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.595715 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" event={"ID":"f8a6e66d-c97e-43eb-8ff0-864e543f5488","Type":"ContainerStarted","Data":"b5a5b0340d9d607f7850ae86c5554d19fb4668620ab068f6ee45e21c5946b57b"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.597658 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" event={"ID":"8dbf6657-96c2-472f-9e4c-0745a4c249be","Type":"ContainerStarted","Data":"c7ff3b764b789d897b9bfd196a743b3b228345d2a5a279295f9f622a525bd1d7"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.599523 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" event={"ID":"66ae9a2f-1c24-4a65-b961-bd9431c667f6","Type":"ContainerStarted","Data":"d93abf78c7f8a52572409ae932a6094ae129c4a95b7589d23cbb0ecf2406399c"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.599565 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" 
event={"ID":"66ae9a2f-1c24-4a65-b961-bd9431c667f6","Type":"ContainerStarted","Data":"52ab081bae665b609acc8b914a4691d13bdfc6cd70a3336411e22ca4912a0bda"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.601047 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m9h2g" event={"ID":"1b5822e3-ccfc-4261-9a39-8f02356add90","Type":"ContainerStarted","Data":"d324e596860192125ed4a47197cbde148a2f7bf8eda70f9f83c6374b1bb45a93"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.602755 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fz66j" event={"ID":"a47109aa-f36b-4a01-89d4-832ff0a7a700","Type":"ContainerStarted","Data":"5298701e10681b445590c289fa3baa5f6316b6015c9fcb07a88c831aa3431d9f"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.604524 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" event={"ID":"d2ddc114-bfc4-444f-aeb3-8d43d95bec09","Type":"ContainerStarted","Data":"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.605238 4845 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hr44j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.605291 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.605996 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" event={"ID":"30bde55e-4121-4b71-b6f4-6cb3a9acd82e","Type":"ContainerStarted","Data":"523a5bcb42e2050e61412c7d355eedcd8752caa8ebcb7f962918e3ce965038aa"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.607361 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" event={"ID":"f295e287-05b6-45e1-bfd5-3c71d7a87f15","Type":"ContainerStarted","Data":"2032d8132587b0205ba2a042609961f10d7df1a4db30f776d5ae33d8f3213dbb"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.607518 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.608531 4845 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c5c85 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.608604 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 10:29:25 +0000 UTC, rotation deadline is 2026-11-01 18:32:46.197899056 +0000 UTC Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.608650 4845 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6535h58m19.589251181s for next certificate rotation Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.608643 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" podUID="f295e287-05b6-45e1-bfd5-3c71d7a87f15" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: 
connection refused" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.609632 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f2fvl" event={"ID":"b7a64280-2fd1-4149-826a-1f0daed66dc1","Type":"ContainerStarted","Data":"ca70ec0f75ce8b4bd33b625b6da7e53c1506ec0e88616865dd0f11fbb10f6045"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.609668 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.609681 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f2fvl" event={"ID":"b7a64280-2fd1-4149-826a-1f0daed66dc1","Type":"ContainerStarted","Data":"251d4707445d8f9e2de4c7325fbce1e984410b11da18dc7756bd01d8d7fbd776"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.610878 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mgqnl" event={"ID":"82fb9557-bbfb-42e4-ba6c-522685082e66","Type":"ContainerStarted","Data":"d7d86aeb9105f2263c176d1c04cc1ecd64b08201304551ffd25cb8509d88e94d"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.612576 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" event={"ID":"d1f9a812-d62e-44ca-b83f-5f240ede92a0","Type":"ContainerStarted","Data":"84f1faaa8fffaf568a69864a03da17d4204af6eca21c94c1ab201ae2b147577c"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.613978 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" event={"ID":"54f66031-6300-4334-8a24-bfe02897b467","Type":"ContainerStarted","Data":"8d6b39cd6b039eb05d9bbac46938448f581611d70cd55520093739ba70b6e48e"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.615515 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" 
event={"ID":"722bda9f-5a8b-4c83-8b1f-790da0003ce9","Type":"ContainerStarted","Data":"b6977f46f27851e30a28109002704ebc9f01c462d2861c8d4d416ea751cbe172"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.618407 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5" event={"ID":"6bedd3e9-4212-4b9b-866a-a473d7f1c632","Type":"ContainerStarted","Data":"cf7cbfb99fbaf34715fba99e46da3e1b5975f03ca13e661dec291e428d188638"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.618477 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5" event={"ID":"6bedd3e9-4212-4b9b-866a-a473d7f1c632","Type":"ContainerStarted","Data":"d3248cf068b81a405fb112e6bfaa70139056a8dc42811ffae71a01a06f787453"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.619801 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" event={"ID":"ccb6c76e-4ee2-4dcc-91a9-c91e25299780","Type":"ContainerStarted","Data":"ebe12fc2940989d5d15879bf3386fef5893ce44c58cd002dd7b84b485874cdcc"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.619928 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.621124 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" event={"ID":"e903551f-3d78-4de4-a08a-ce9ea234942c","Type":"ContainerStarted","Data":"806517c86b6a5019159fa17c452bf029de4b7dcb535dec694d01f62269e0cdac"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.624389 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" 
event={"ID":"5bcf9211-edc2-4706-a9ac-b5f38b856186","Type":"ContainerStarted","Data":"727c4d0934941ab59591696077c9c67e22746af476c0646616af2b3146abd3bb"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.625478 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" event={"ID":"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd","Type":"ContainerStarted","Data":"f9b9887e548d54bb14e6cb1ffbf2477aee082f38d18a932c8121b4adcb811fca"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.626002 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8btfx" podStartSLOduration=126.625986568 podStartE2EDuration="2m6.625986568s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.624397962 +0000 UTC m=+147.715799412" watchObservedRunningTime="2026-02-02 10:34:26.625986568 +0000 UTC m=+147.717388018" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.626291 4845 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z9qjh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.626322 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.626799 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qmvcn" event={"ID":"b5d758a8-6722-4c1b-be56-fe2bb6d27830","Type":"ContainerStarted","Data":"1e8a96abdb190bfa53c7b46f82b607b3ccfbde111881cbba498bc5fa6b956156"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.626981 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" podStartSLOduration=126.626974336 podStartE2EDuration="2m6.626974336s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:25.737052843 +0000 UTC m=+146.828454283" watchObservedRunningTime="2026-02-02 10:34:26.626974336 +0000 UTC m=+147.718375786" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.629120 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" event={"ID":"be245fb2-4ef3-4642-aae0-14954ab28ffa","Type":"ContainerStarted","Data":"5f5dc25f1124301e18535afe7eb535af68b37b6a2b8da08b80ae3135e606d9c2"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.629148 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" event={"ID":"be245fb2-4ef3-4642-aae0-14954ab28ffa","Type":"ContainerStarted","Data":"59ea93e07ef4be9fc4d6c2fe422b2647eb90fd79fcdd6ada749bcdfcc72b5549"} Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.652550 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.662512 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" podStartSLOduration=125.662494666 podStartE2EDuration="2m5.662494666s" 
podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.661619921 +0000 UTC m=+147.753021371" watchObservedRunningTime="2026-02-02 10:34:26.662494666 +0000 UTC m=+147.753896116" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.681946 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-f2fvl" podStartSLOduration=6.681930689 podStartE2EDuration="6.681930689s" podCreationTimestamp="2026-02-02 10:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.680743395 +0000 UTC m=+147.772144835" watchObservedRunningTime="2026-02-02 10:34:26.681930689 +0000 UTC m=+147.773332139" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.683377 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.683509 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.183490925 +0000 UTC m=+148.274892375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.684240 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.686142 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.186130721 +0000 UTC m=+148.277532171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.713692 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qx7sb" podStartSLOduration=125.713674549 podStartE2EDuration="2m5.713674549s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.713559886 +0000 UTC m=+147.804961336" watchObservedRunningTime="2026-02-02 10:34:26.713674549 +0000 UTC m=+147.805075999" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.735431 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2vcpg" podStartSLOduration=126.73541614 podStartE2EDuration="2m6.73541614s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.733677109 +0000 UTC m=+147.825078549" watchObservedRunningTime="2026-02-02 10:34:26.73541614 +0000 UTC m=+147.826817590" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.755992 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c46tw" podStartSLOduration=125.755975226 podStartE2EDuration="2m5.755975226s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.755157222 +0000 UTC m=+147.846558682" watchObservedRunningTime="2026-02-02 10:34:26.755975226 +0000 UTC m=+147.847376676" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.788161 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.792204 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.292177075 +0000 UTC m=+148.383578535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.808343 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" podStartSLOduration=125.808324813 podStartE2EDuration="2m5.808324813s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.773556545 +0000 UTC m=+147.864957995" watchObservedRunningTime="2026-02-02 10:34:26.808324813 +0000 UTC m=+147.899726273" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.808961 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" podStartSLOduration=125.808954471 podStartE2EDuration="2m5.808954471s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.807601432 +0000 UTC m=+147.899002902" watchObservedRunningTime="2026-02-02 10:34:26.808954471 +0000 UTC m=+147.900355921" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.852162 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-h92cr" podStartSLOduration=126.852144073 podStartE2EDuration="2m6.852144073s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.848321672 +0000 UTC m=+147.939723122" watchObservedRunningTime="2026-02-02 10:34:26.852144073 +0000 UTC m=+147.943545523" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.887188 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" podStartSLOduration=125.887165698 podStartE2EDuration="2m5.887165698s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.886915131 +0000 UTC m=+147.978316581" watchObservedRunningTime="2026-02-02 10:34:26.887165698 +0000 UTC m=+147.978567148" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.891071 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.891356 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.391344379 +0000 UTC m=+148.482745829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.913231 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mgqnl" podStartSLOduration=6.913213953 podStartE2EDuration="6.913213953s" podCreationTimestamp="2026-02-02 10:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.911563635 +0000 UTC m=+148.002965085" watchObservedRunningTime="2026-02-02 10:34:26.913213953 +0000 UTC m=+148.004615403" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.975870 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r92c5" podStartSLOduration=125.975853029 podStartE2EDuration="2m5.975853029s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.974115378 +0000 UTC m=+148.065516828" watchObservedRunningTime="2026-02-02 10:34:26.975853029 +0000 UTC m=+148.067254479" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.978333 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" podStartSLOduration=126.97832182 podStartE2EDuration="2m6.97832182s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:26.946445376 +0000 UTC m=+148.037846826" watchObservedRunningTime="2026-02-02 10:34:26.97832182 +0000 UTC m=+148.069723270" Feb 02 10:34:26 crc kubenswrapper[4845]: I0202 10:34:26.992950 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:26 crc kubenswrapper[4845]: E0202 10:34:26.993492 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.49347405 +0000 UTC m=+148.584875510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.027228 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ns95h" podStartSLOduration=126.027195127 podStartE2EDuration="2m6.027195127s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:27.024519509 +0000 UTC m=+148.115920959" watchObservedRunningTime="2026-02-02 10:34:27.027195127 +0000 UTC m=+148.118596577" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.057080 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rvxdk" podStartSLOduration=126.057057373 podStartE2EDuration="2m6.057057373s" podCreationTimestamp="2026-02-02 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:27.055132507 +0000 UTC m=+148.146533947" watchObservedRunningTime="2026-02-02 10:34:27.057057373 +0000 UTC m=+148.148458823" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.082592 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.082936 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.083624 4845 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xsdsh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.083661 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" podUID="66ae9a2f-1c24-4a65-b961-bd9431c667f6" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.096532 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.096826 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.596816225 +0000 UTC m=+148.688217675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.190740 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-srnzq"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.191627 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.195283 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.200732 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.200994 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.700971934 +0000 UTC m=+148.792373394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.201141 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.201503 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.701494349 +0000 UTC m=+148.792895809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.229050 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srnzq"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.302650 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.302848 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.802822516 +0000 UTC m=+148.894223966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.302922 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.302968 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.303004 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.303024 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ffd\" (UniqueName: 
\"kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.303275 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.803264749 +0000 UTC m=+148.894666259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.350809 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:27 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:27 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:27 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.350866 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.395583 4845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-nqqx9"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.396572 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.398617 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.404531 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.404766 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.404814 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.404832 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55ffd\" (UniqueName: \"kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " 
pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.405199 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:27.905183553 +0000 UTC m=+148.996585003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.405899 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.406087 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.420274 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqqx9"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.433378 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-55ffd\" (UniqueName: \"kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd\") pod \"community-operators-srnzq\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") " pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.506023 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.506647 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.506757 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.506897 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5dl\" (UniqueName: \"kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc 
kubenswrapper[4845]: E0202 10:34:27.507149 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.007111918 +0000 UTC m=+149.098513368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.515027 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.552036 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.552076 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.588693 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.589591 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.607931 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.608130 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.108083694 +0000 UTC m=+149.199485144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.608344 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.608451 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.608578 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5dl\" (UniqueName: \"kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.608708 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.608797 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.108780734 +0000 UTC m=+149.200182184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.608835 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.609245 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.636873 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5dl\" (UniqueName: \"kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl\") pod \"certified-operators-nqqx9\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") " pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.652346 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.665259 4845 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hr44j container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.665308 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.665527 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" event={"ID":"54f66031-6300-4334-8a24-bfe02897b467","Type":"ContainerStarted","Data":"6b93665d738b5c7b6c7fbc76893ea096a3c10779063bcfaceb066fab2bc7a0c2"} Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.672216 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727255 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727553 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 
10:34:27.727599 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727637 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727667 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpsx\" (UniqueName: \"kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727712 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727875 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.727990 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.737266 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.737365 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.237341771 +0000 UTC m=+149.328743221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.740948 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.749321 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.752160 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.772214 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.787661 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-566gk"] Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.801768 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.829763 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.830122 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.830199 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpsx\" (UniqueName: \"kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.830222 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.832991 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.332973253 +0000 UTC m=+149.424374823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.835690 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.838023 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.843475 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-566gk"] Feb 02 10:34:27 crc 
kubenswrapper[4845]: I0202 10:34:27.884815 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpsx\" (UniqueName: \"kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx\") pod \"community-operators-7pz84\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.903238 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.932833 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.933080 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s42p\" (UniqueName: \"kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.933136 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.933206 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:27 crc kubenswrapper[4845]: E0202 10:34:27.933285 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.43327307 +0000 UTC m=+149.524674520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.942371 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.957997 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:34:27 crc kubenswrapper[4845]: I0202 10:34:27.975134 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.034583 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.034621 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.034673 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.034707 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s42p\" (UniqueName: \"kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.035203 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:28.535192114 +0000 UTC m=+149.626593564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.035637 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.035831 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.071624 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s42p\" (UniqueName: \"kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p\") pod \"certified-operators-566gk\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.136352 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.136536 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.636521221 +0000 UTC m=+149.727922671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.136733 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.137019 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.637011995 +0000 UTC m=+149.728413445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.166699 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-srnzq"] Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.174574 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.237314 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.238007 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.737988762 +0000 UTC m=+149.829390222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.338926 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.339251 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.839236547 +0000 UTC m=+149.930637997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.440367 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.440544 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.940518522 +0000 UTC m=+150.031919972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.440632 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.440874 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:28.940866983 +0000 UTC m=+150.032268433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.541505 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.541675 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.041647144 +0000 UTC m=+150.133048594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.541735 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.542040 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.042028535 +0000 UTC m=+150.133429985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.642641 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.642813 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.142789885 +0000 UTC m=+150.234191335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.642870 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.643131 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.143124135 +0000 UTC m=+150.234525585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.666755 4845 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c5c85 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.666797 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" podUID="f295e287-05b6-45e1-bfd5-3c71d7a87f15" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.668584 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerStarted","Data":"7489421b9d82561c3c59320133d54302d24308214065fb740dafa4f42a2056e8"} Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.744090 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.744340 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.244311578 +0000 UTC m=+150.335713018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:28 crc kubenswrapper[4845]: I0202 10:34:28.744810 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:28 crc kubenswrapper[4845]: E0202 10:34:28.745114 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.245106791 +0000 UTC m=+150.336508241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.133474 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.133806 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.633788197 +0000 UTC m=+150.725189647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.194279 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:29 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:29 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:29 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.194872 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.199058 4845 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-x9pr7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.199166 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" podUID="6de6b4aa-d335-4eb0-b880-7a21c9336ebf" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.200799 4845 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-x9pr7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.200844 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7" podUID="6de6b4aa-d335-4eb0-b880-7a21c9336ebf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.237255 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.237611 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.737596696 +0000 UTC m=+150.828998146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.339859 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.340063 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.340168 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:29.840146918 +0000 UTC m=+150.931548369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.350442 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:29 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:29 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:29 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.350519 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.443986 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.444981 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:29.944963176 +0000 UTC m=+151.036364616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.492296 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqqx9"] Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.504962 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.505805 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.518125 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.548311 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.549063 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:30.049041462 +0000 UTC m=+151.140442912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.564008 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.573585 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.668580 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.668646 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.668670 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.668967 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.168954418 +0000 UTC m=+151.260355868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.773192 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.773775 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.273751976 +0000 UTC m=+151.365153426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.774080 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.774137 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.774238 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 10:34:29.774697 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:34:30.274678562 +0000 UTC m=+151.366080012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.773586 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerStarted","Data":"411d114dec4634d8215b7fca2758294946426abdcb66d052869b2ccdb984e078"} Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.775039 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.797561 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.806133 4845 generic.go:334] "Generic (PLEG): container finished" podID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerID="bce7fc681aa538cb76a68f4632435369a39f67dafd5cfa64c4b02c6ca032214c" exitCode=0 Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.806408 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" 
event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerDied","Data":"bce7fc681aa538cb76a68f4632435369a39f67dafd5cfa64c4b02c6ca032214c"} Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.815361 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.843311 4845 generic.go:334] "Generic (PLEG): container finished" podID="30bde55e-4121-4b71-b6f4-6cb3a9acd82e" containerID="523a5bcb42e2050e61412c7d355eedcd8752caa8ebcb7f962918e3ce965038aa" exitCode=0 Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.843406 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" event={"ID":"30bde55e-4121-4b71-b6f4-6cb3a9acd82e","Type":"ContainerDied","Data":"523a5bcb42e2050e61412c7d355eedcd8752caa8ebcb7f962918e3ce965038aa"} Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.877060 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.877837 4845 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.878239 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:34:29 crc kubenswrapper[4845]: E0202 
10:34:29.878546 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.378532193 +0000 UTC m=+151.469933643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.885631 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.922403 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" event={"ID":"54f66031-6300-4334-8a24-bfe02897b467","Type":"ContainerStarted","Data":"822e913b7ae67619853c6c7a2090b6a15de579c20247e3b1e5e88f0c41b9c6b4"} Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.940683 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-49vf9" Feb 02 10:34:29 crc kubenswrapper[4845]: I0202 10:34:29.989113 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:29 crc 
kubenswrapper[4845]: E0202 10:34:29.989463 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.489451338 +0000 UTC m=+151.580852788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.038099 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"] Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.085780 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"] Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.085966 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.091233 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.092260 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 10:34:30 crc kubenswrapper[4845]: E0202 10:34:30.092427 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.592412822 +0000 UTC m=+151.683814272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.132557 4845 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T10:34:29.877852223Z","Handler":null,"Name":""}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.200528 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.200573 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.200623 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ql7\" (UniqueName: \"kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.200659 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: E0202 10:34:30.211954 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.700917267 +0000 UTC m=+151.792318717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-thf72" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.221585 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.222595 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.228464 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.234096 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.236903 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-566gk"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.301404 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.301644 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.301736 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8ql7\" (UniqueName: \"kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.301770 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.302303 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: E0202 10:34:30.302388 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:34:30.802373738 +0000 UTC m=+151.893775188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.303171 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.339194 4845 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.339518 4845 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.350507 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8ql7\" (UniqueName: \"kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7\") pod \"redhat-marketplace-xf5wp\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") " pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.373131 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:34:30 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld
Feb 02 10:34:30 crc kubenswrapper[4845]: [+]process-running ok
Feb 02 10:34:30 crc kubenswrapper[4845]: healthz check failed
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.373195 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.403769 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.403829 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7pg\" (UniqueName: \"kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.403861 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.403924 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.409984 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.411005 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.413698 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.427741 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.427781 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.481298 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504680 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504711 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f2m\" (UniqueName: \"kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504743 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504783 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7pg\" (UniqueName: \"kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504828 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.504842 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.505201 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.505413 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.545209 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7pg\" (UniqueName: \"kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg\") pod \"redhat-operators-nb8f9\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") " pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.570425 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.578788 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.579766 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.606236 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7f2m\" (UniqueName: \"kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.606478 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.606609 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.607472 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.608078 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.617653 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.638630 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.682340 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7f2m\" (UniqueName: \"kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m\") pod \"redhat-marketplace-89bwt\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.713941 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.714255 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-928mf\" (UniqueName: \"kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.714287 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.737228 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-thf72\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") " pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.804056 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89bwt"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.816049 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.816577 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.816647 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-928mf\" (UniqueName: \"kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.816672 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.817582 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.827193 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.841487 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.845385 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-928mf\" (UniqueName: \"kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf\") pod \"redhat-operators-jxk8q\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: E0202 10:34:30.879127 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod149fcd2d_91c2_493a_a1ec_c8675e1901ef.slice/crio-conmon-4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.889837 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"]
Feb 02 10:34:30 crc kubenswrapper[4845]: W0202 10:34:30.908284 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ceca4a8_b0dd_47cc_a1fe_818e984af772.slice/crio-0d84e03378117188ed8428e61cf660af9bf78e4894f24fa58ab65c169ebc8078 WatchSource:0}: Error finding container 0d84e03378117188ed8428e61cf660af9bf78e4894f24fa58ab65c169ebc8078: Status 404 returned error can't find the container with id 0d84e03378117188ed8428e61cf660af9bf78e4894f24fa58ab65c169ebc8078
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.931303 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxk8q"
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.956357 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9bfe19dbbe7d1e40546e552ab43d37862197f4f4cfb7d08bb020bb00479fc6ba"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.956407 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d48d1c6c37d0a5e8ad259c05304325a7b427c719454c390b49bf80e8494319ac"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.965464 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" event={"ID":"54f66031-6300-4334-8a24-bfe02897b467","Type":"ContainerStarted","Data":"7714a652cda8199b9ff2d1687cc63840be0d7d6f127a29a22cba6c4a5e814604"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.968130 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69c8101f-8598-4724-b4b8-404da68760f9","Type":"ContainerStarted","Data":"056e30bf0adad14504d00252db500715c4d9cd3fe159929432752cd0a022f096"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.972728 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1dc524670dd7946b51dffe84822fe5d9ef966942bccaf3e1442ebb4e88426872"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.972776 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8f43a75f7821fc1df1d7f00deeace7d0a569fb02a8aaa97af12caa17613db502"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.988691 4845 generic.go:334] "Generic (PLEG): container finished" podID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerID="79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822" exitCode=0
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.988777 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerDied","Data":"79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822"}
Feb 02 10:34:30 crc kubenswrapper[4845]: I0202 10:34:30.988806 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerStarted","Data":"9bcdab2008f0c99412111dda17b5eb250f02325e2f7935a2aadaa0f85ebf0c92"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.003146 4845 generic.go:334] "Generic (PLEG): container finished" podID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerID="9aa6e5661ba4da1e3f07e47952d04aef1b22875db8155df2a2c40d59b58e9f5c" exitCode=0
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.003424 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerDied","Data":"9aa6e5661ba4da1e3f07e47952d04aef1b22875db8155df2a2c40d59b58e9f5c"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.010706 4845 generic.go:334] "Generic (PLEG): container finished" podID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerID="4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358" exitCode=0
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.010773 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerDied","Data":"4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.010797 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerStarted","Data":"9c90348dba10073c8c8427f15a5626e4e8ea2c366d1f13b9219eb720f44735c2"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.023941 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.049687 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e2a63c510b246dde839c36e8a8c8d5dbdbe226d8530c8f27fb153a04a8c8aef5"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.049723 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"acf6c6962b036d9bc4c7a4d78a1dc9f7b42ce529abfeb5c629546e391eba24c4"}
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.050270 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.176354 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wsz25" podStartSLOduration=11.17633466 podStartE2EDuration="11.17633466s" podCreationTimestamp="2026-02-02 10:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:31.068456903 +0000 UTC m=+152.159858353" watchObservedRunningTime="2026-02-02 10:34:31.17633466 +0000 UTC m=+152.267736110"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.182169 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.183724 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.193077 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.193683 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.206454 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.211226 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x9pr7"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.239702 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"]
Feb 02 10:34:31 crc kubenswrapper[4845]: W0202 10:34:31.258254 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66911d31_17db_4d9e_b0c2_9cb699fc0778.slice/crio-16c241a98cc49754e7cd69effe7e44d81157009a11eabff0c36f137b39003b4c WatchSource:0}: Error finding container 16c241a98cc49754e7cd69effe7e44d81157009a11eabff0c36f137b39003b4c: Status 404 returned error can't find the container with id 16c241a98cc49754e7cd69effe7e44d81157009a11eabff0c36f137b39003b4c
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.329508 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.330737 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.369149 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:34:31 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld
Feb 02 10:34:31 crc kubenswrapper[4845]: [+]process-running ok
Feb 02 10:34:31 crc kubenswrapper[4845]: healthz check failed
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.369202 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.432729 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.432859 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.432961 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.464068 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.469225 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"]
Feb 02 10:34:31 crc kubenswrapper[4845]: W0202 10:34:31.481775 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod346c427b_6ed6_4bac_ae1f_ee2400ab6884.slice/crio-6eeeecc6f7635724fe5c5de24d8b0a5978478aedd321e7117013fb83d5e17bed WatchSource:0}: Error finding container 6eeeecc6f7635724fe5c5de24d8b0a5978478aedd321e7117013fb83d5e17bed: Status 404 returned error can't find the container with id 6eeeecc6f7635724fe5c5de24d8b0a5978478aedd321e7117013fb83d5e17bed
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.523118 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.540490 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"]
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.637732 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"
Feb 02 10:34:31 crc kubenswrapper[4845]: W0202 10:34:31.680831 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1579ee3d_0fc7_456f_a78a_eb18aa7bf2bd.slice/crio-45e45e9dbbedd64142e2fefb82cba902c15048017b203f1719c5975bd5a5ec46 WatchSource:0}: Error finding container 45e45e9dbbedd64142e2fefb82cba902c15048017b203f1719c5975bd5a5ec46: Status 404 returned error can't find the container with id 45e45e9dbbedd64142e2fefb82cba902c15048017b203f1719c5975bd5a5ec46
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.686706 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"]
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.725329 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.737395 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume\") pod \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") "
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.737428 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6jd9\" (UniqueName: \"kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9\") pod \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") "
Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.737523 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName:
\"kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume\") pod \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\" (UID: \"30bde55e-4121-4b71-b6f4-6cb3a9acd82e\") " Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.738212 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume" (OuterVolumeSpecName: "config-volume") pod "30bde55e-4121-4b71-b6f4-6cb3a9acd82e" (UID: "30bde55e-4121-4b71-b6f4-6cb3a9acd82e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.747415 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9" (OuterVolumeSpecName: "kube-api-access-j6jd9") pod "30bde55e-4121-4b71-b6f4-6cb3a9acd82e" (UID: "30bde55e-4121-4b71-b6f4-6cb3a9acd82e"). InnerVolumeSpecName "kube-api-access-j6jd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.755685 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "30bde55e-4121-4b71-b6f4-6cb3a9acd82e" (UID: "30bde55e-4121-4b71-b6f4-6cb3a9acd82e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.840541 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.840565 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:31 crc kubenswrapper[4845]: I0202 10:34:31.840576 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6jd9\" (UniqueName: \"kubernetes.io/projected/30bde55e-4121-4b71-b6f4-6cb3a9acd82e-kube-api-access-j6jd9\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.040033 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:34:32 crc kubenswrapper[4845]: W0202 10:34:32.052197 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd2246afc_db13_479f_8ce0_fbfd40b28302.slice/crio-0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b WatchSource:0}: Error finding container 0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b: Status 404 returned error can't find the container with id 0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.064559 4845 generic.go:334] "Generic (PLEG): container finished" podID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerID="df3414b89201cc98711df2db8d1c899c06cc968a324e0b8467c7e36e96868e51" exitCode=0 Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.064626 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" 
event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerDied","Data":"df3414b89201cc98711df2db8d1c899c06cc968a324e0b8467c7e36e96868e51"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.064649 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerStarted","Data":"0d84e03378117188ed8428e61cf660af9bf78e4894f24fa58ab65c169ebc8078"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.069000 4845 generic.go:334] "Generic (PLEG): container finished" podID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerID="f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175" exitCode=0 Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.069080 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerDied","Data":"f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.069109 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerStarted","Data":"45e45e9dbbedd64142e2fefb82cba902c15048017b203f1719c5975bd5a5ec46"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.086078 4845 generic.go:334] "Generic (PLEG): container finished" podID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerID="ac8e195b0cbc3f127a71603eaf254c9928ef5832347572bb602ee62efcc4964c" exitCode=0 Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.086185 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerDied","Data":"ac8e195b0cbc3f127a71603eaf254c9928ef5832347572bb602ee62efcc4964c"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 
10:34:32.086250 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerStarted","Data":"6eeeecc6f7635724fe5c5de24d8b0a5978478aedd321e7117013fb83d5e17bed"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.094483 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" event={"ID":"30bde55e-4121-4b71-b6f4-6cb3a9acd82e","Type":"ContainerDied","Data":"784d840cf0db3fd4333a0324bf44adc6d663afbb97888004d15f7427f54cbe89"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.094527 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784d840cf0db3fd4333a0324bf44adc6d663afbb97888004d15f7427f54cbe89" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.094591 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.115084 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.118334 4845 generic.go:334] "Generic (PLEG): container finished" podID="69c8101f-8598-4724-b4b8-404da68760f9" containerID="160bf3d5bf0112854d2dffd0786d9579ef214a1860260acb1922ad96e7bc5212" exitCode=0 Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.118390 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69c8101f-8598-4724-b4b8-404da68760f9","Type":"ContainerDied","Data":"160bf3d5bf0112854d2dffd0786d9579ef214a1860260acb1922ad96e7bc5212"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.125032 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-76f77b778f-xsdsh" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.130459 4845 generic.go:334] "Generic (PLEG): container finished" podID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerID="ce02065ee7ea69eff70e052c4a44aab42be4dcba45489f2ca7fc1dcf482f400b" exitCode=0 Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.130523 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerDied","Data":"ce02065ee7ea69eff70e052c4a44aab42be4dcba45489f2ca7fc1dcf482f400b"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.130584 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerStarted","Data":"16c241a98cc49754e7cd69effe7e44d81157009a11eabff0c36f137b39003b4c"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.175210 4845 patch_prober.go:28] interesting pod/downloads-7954f5f757-4rbqr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.175457 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4rbqr" podUID="a70e2a3d-9afe-4437-b9ef-fe175eee93d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.177303 4845 patch_prober.go:28] interesting pod/downloads-7954f5f757-4rbqr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 02 10:34:32 crc 
kubenswrapper[4845]: I0202 10:34:32.177362 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4rbqr" podUID="a70e2a3d-9afe-4437-b9ef-fe175eee93d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.180912 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" event={"ID":"339fe372-b3de-4832-b32f-0218d2c0545b","Type":"ContainerStarted","Data":"c850483e123504ea082a0c8f17db4867ee8a686d50fc85724804f4fd70d8bc85"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.180958 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" event={"ID":"339fe372-b3de-4832-b32f-0218d2c0545b","Type":"ContainerStarted","Data":"b02dab70c0d575fa03917d15a317aa67ee29edaf5ecdee7aa9680da7630022b9"} Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.185571 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.357341 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.367773 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:32 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:32 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:32 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.367839 4845 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.590252 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.590336 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.595466 4845 patch_prober.go:28] interesting pod/console-f9d7485db-8gjpm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 10:34:32 crc kubenswrapper[4845]: I0202 10:34:32.595527 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8gjpm" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.081043 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.096444 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" podStartSLOduration=133.096426163 podStartE2EDuration="2m13.096426163s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:32.298082004 +0000 UTC m=+153.389483474" 
watchObservedRunningTime="2026-02-02 10:34:33.096426163 +0000 UTC m=+154.187827613" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.189577 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2246afc-db13-479f-8ce0-fbfd40b28302","Type":"ContainerStarted","Data":"22f42242af9731519d48af37da66970e4887d85a44fa73dc5d26a8dc7828a600"} Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.189653 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2246afc-db13-479f-8ce0-fbfd40b28302","Type":"ContainerStarted","Data":"0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b"} Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.203033 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.203017923 podStartE2EDuration="2.203017923s" podCreationTimestamp="2026-02-02 10:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:33.201460478 +0000 UTC m=+154.292861928" watchObservedRunningTime="2026-02-02 10:34:33.203017923 +0000 UTC m=+154.294419373" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.363136 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:33 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:33 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:33 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.363202 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" 
podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.393984 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c5c85" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.617006 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.682233 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access\") pod \"69c8101f-8598-4724-b4b8-404da68760f9\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.682643 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir\") pod \"69c8101f-8598-4724-b4b8-404da68760f9\" (UID: \"69c8101f-8598-4724-b4b8-404da68760f9\") " Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.683026 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69c8101f-8598-4724-b4b8-404da68760f9" (UID: "69c8101f-8598-4724-b4b8-404da68760f9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.690921 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69c8101f-8598-4724-b4b8-404da68760f9" (UID: "69c8101f-8598-4724-b4b8-404da68760f9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.784412 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c8101f-8598-4724-b4b8-404da68760f9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:33 crc kubenswrapper[4845]: I0202 10:34:33.784441 4845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c8101f-8598-4724-b4b8-404da68760f9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.202709 4845 generic.go:334] "Generic (PLEG): container finished" podID="d2246afc-db13-479f-8ce0-fbfd40b28302" containerID="22f42242af9731519d48af37da66970e4887d85a44fa73dc5d26a8dc7828a600" exitCode=0 Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.202829 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2246afc-db13-479f-8ce0-fbfd40b28302","Type":"ContainerDied","Data":"22f42242af9731519d48af37da66970e4887d85a44fa73dc5d26a8dc7828a600"} Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.206766 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"69c8101f-8598-4724-b4b8-404da68760f9","Type":"ContainerDied","Data":"056e30bf0adad14504d00252db500715c4d9cd3fe159929432752cd0a022f096"} Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 
10:34:34.206799 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056e30bf0adad14504d00252db500715c4d9cd3fe159929432752cd0a022f096" Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.206808 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.349950 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:34 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:34 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:34 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:34 crc kubenswrapper[4845]: I0202 10:34:34.350021 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.182639 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-f2fvl" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.349127 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:35 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:35 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:35 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.349208 4845 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.628353 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.731198 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir\") pod \"d2246afc-db13-479f-8ce0-fbfd40b28302\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.731301 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access\") pod \"d2246afc-db13-479f-8ce0-fbfd40b28302\" (UID: \"d2246afc-db13-479f-8ce0-fbfd40b28302\") " Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.731988 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d2246afc-db13-479f-8ce0-fbfd40b28302" (UID: "d2246afc-db13-479f-8ce0-fbfd40b28302"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.738055 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d2246afc-db13-479f-8ce0-fbfd40b28302" (UID: "d2246afc-db13-479f-8ce0-fbfd40b28302"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.834161 4845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2246afc-db13-479f-8ce0-fbfd40b28302-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:35 crc kubenswrapper[4845]: I0202 10:34:35.834210 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2246afc-db13-479f-8ce0-fbfd40b28302-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:36 crc kubenswrapper[4845]: I0202 10:34:36.227964 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2246afc-db13-479f-8ce0-fbfd40b28302","Type":"ContainerDied","Data":"0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b"} Feb 02 10:34:36 crc kubenswrapper[4845]: I0202 10:34:36.228006 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0870c32ce60ba45e38648a70459251980f5f1985ecf0756c2c03ed4ce2b7941b" Feb 02 10:34:36 crc kubenswrapper[4845]: I0202 10:34:36.228099 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:34:36 crc kubenswrapper[4845]: I0202 10:34:36.349380 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:36 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:36 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:36 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:36 crc kubenswrapper[4845]: I0202 10:34:36.349456 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:37 crc kubenswrapper[4845]: I0202 10:34:37.349669 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:37 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:37 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:37 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:37 crc kubenswrapper[4845]: I0202 10:34:37.350001 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:38 crc kubenswrapper[4845]: I0202 10:34:38.348397 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:38 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:38 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:38 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:38 crc kubenswrapper[4845]: I0202 10:34:38.348453 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:39 crc kubenswrapper[4845]: I0202 10:34:39.349742 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:39 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:39 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:39 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:39 crc kubenswrapper[4845]: I0202 10:34:39.350106 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:40 crc kubenswrapper[4845]: I0202 10:34:40.349273 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:40 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:40 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:40 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:40 crc kubenswrapper[4845]: I0202 10:34:40.349328 4845 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:41 crc kubenswrapper[4845]: I0202 10:34:41.349450 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:41 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:41 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:41 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:41 crc kubenswrapper[4845]: I0202 10:34:41.349517 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.180314 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4rbqr" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.349212 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:42 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:42 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:42 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.349270 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.589922 4845 patch_prober.go:28] interesting pod/console-f9d7485db-8gjpm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.589990 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8gjpm" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.745799 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.756266 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84cb7b66-62e7-4012-ab80-7c5e6ba51e35-metrics-certs\") pod \"network-metrics-daemon-pmn9h\" (UID: \"84cb7b66-62e7-4012-ab80-7c5e6ba51e35\") " pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:42 crc kubenswrapper[4845]: I0202 10:34:42.983517 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pmn9h" Feb 02 10:34:43 crc kubenswrapper[4845]: I0202 10:34:43.358228 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:43 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:43 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:43 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:43 crc kubenswrapper[4845]: I0202 10:34:43.358527 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:44 crc kubenswrapper[4845]: I0202 10:34:44.354839 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:44 crc kubenswrapper[4845]: [-]has-synced failed: reason withheld Feb 02 10:34:44 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:44 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:44 crc kubenswrapper[4845]: I0202 10:34:44.354943 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:45 crc kubenswrapper[4845]: I0202 10:34:45.351142 4845 patch_prober.go:28] interesting pod/router-default-5444994796-rbhk2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 02 10:34:45 crc kubenswrapper[4845]: [+]has-synced ok Feb 02 10:34:45 crc kubenswrapper[4845]: [+]process-running ok Feb 02 10:34:45 crc kubenswrapper[4845]: healthz check failed Feb 02 10:34:45 crc kubenswrapper[4845]: I0202 10:34:45.351238 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rbhk2" podUID="8291b32a-8322-4027-af13-cd9f10390406" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:34:46 crc kubenswrapper[4845]: I0202 10:34:46.238383 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:34:46 crc kubenswrapper[4845]: I0202 10:34:46.238468 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:34:46 crc kubenswrapper[4845]: I0202 10:34:46.349357 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:46 crc kubenswrapper[4845]: I0202 10:34:46.352053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rbhk2" Feb 02 10:34:47 crc kubenswrapper[4845]: I0202 10:34:47.025153 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:47 crc kubenswrapper[4845]: I0202 10:34:47.025660 4845 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" containerID="cri-o://f9b9887e548d54bb14e6cb1ffbf2477aee082f38d18a932c8121b4adcb811fca" gracePeriod=30 Feb 02 10:34:47 crc kubenswrapper[4845]: I0202 10:34:47.042278 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"] Feb 02 10:34:47 crc kubenswrapper[4845]: I0202 10:34:47.042715 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" podUID="ac875c91-285a-420b-9065-50af53ab50d3" containerName="route-controller-manager" containerID="cri-o://1fcb883e8f41725d44677baa16381c121b0232bd5110fcb8fbbbde75a1dff506" gracePeriod=30 Feb 02 10:34:48 crc kubenswrapper[4845]: I0202 10:34:48.326559 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac875c91-285a-420b-9065-50af53ab50d3" containerID="1fcb883e8f41725d44677baa16381c121b0232bd5110fcb8fbbbde75a1dff506" exitCode=0 Feb 02 10:34:48 crc kubenswrapper[4845]: I0202 10:34:48.326646 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" event={"ID":"ac875c91-285a-420b-9065-50af53ab50d3","Type":"ContainerDied","Data":"1fcb883e8f41725d44677baa16381c121b0232bd5110fcb8fbbbde75a1dff506"} Feb 02 10:34:48 crc kubenswrapper[4845]: I0202 10:34:48.328767 4845 generic.go:334] "Generic (PLEG): container finished" podID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerID="f9b9887e548d54bb14e6cb1ffbf2477aee082f38d18a932c8121b4adcb811fca" exitCode=0 Feb 02 10:34:48 crc kubenswrapper[4845]: I0202 10:34:48.328829 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" 
event={"ID":"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd","Type":"ContainerDied","Data":"f9b9887e548d54bb14e6cb1ffbf2477aee082f38d18a932c8121b4adcb811fca"} Feb 02 10:34:51 crc kubenswrapper[4845]: I0202 10:34:51.028719 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" Feb 02 10:34:52 crc kubenswrapper[4845]: I0202 10:34:52.594307 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:52 crc kubenswrapper[4845]: I0202 10:34:52.597842 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8gjpm" Feb 02 10:34:52 crc kubenswrapper[4845]: I0202 10:34:52.846754 4845 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z9qjh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 02 10:34:52 crc kubenswrapper[4845]: I0202 10:34:52.846835 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 02 10:34:53 crc kubenswrapper[4845]: I0202 10:34:53.064183 4845 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jvc49 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:34:53 crc kubenswrapper[4845]: I0202 10:34:53.064240 4845 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" podUID="ac875c91-285a-420b-9065-50af53ab50d3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.511599 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.549498 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:34:54 crc kubenswrapper[4845]: E0202 10:34:54.549801 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c8101f-8598-4724-b4b8-404da68760f9" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.549821 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c8101f-8598-4724-b4b8-404da68760f9" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: E0202 10:34:54.549839 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2246afc-db13-479f-8ce0-fbfd40b28302" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.549850 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2246afc-db13-479f-8ce0-fbfd40b28302" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: E0202 10:34:54.549871 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bde55e-4121-4b71-b6f4-6cb3a9acd82e" containerName="collect-profiles" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.549916 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bde55e-4121-4b71-b6f4-6cb3a9acd82e" containerName="collect-profiles" Feb 02 10:34:54 crc 
kubenswrapper[4845]: E0202 10:34:54.549936 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac875c91-285a-420b-9065-50af53ab50d3" containerName="route-controller-manager" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.549947 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac875c91-285a-420b-9065-50af53ab50d3" containerName="route-controller-manager" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.550297 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac875c91-285a-420b-9065-50af53ab50d3" containerName="route-controller-manager" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.550318 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c8101f-8598-4724-b4b8-404da68760f9" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.550335 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bde55e-4121-4b71-b6f4-6cb3a9acd82e" containerName="collect-profiles" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.550352 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2246afc-db13-479f-8ce0-fbfd40b28302" containerName="pruner" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.550960 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.555498 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.620782 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca\") pod \"ac875c91-285a-420b-9065-50af53ab50d3\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.620850 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw74n\" (UniqueName: \"kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n\") pod \"ac875c91-285a-420b-9065-50af53ab50d3\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.620871 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert\") pod \"ac875c91-285a-420b-9065-50af53ab50d3\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.620968 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config\") pod \"ac875c91-285a-420b-9065-50af53ab50d3\" (UID: \"ac875c91-285a-420b-9065-50af53ab50d3\") " Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.621581 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac875c91-285a-420b-9065-50af53ab50d3" 
(UID: "ac875c91-285a-420b-9065-50af53ab50d3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.621617 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config" (OuterVolumeSpecName: "config") pod "ac875c91-285a-420b-9065-50af53ab50d3" (UID: "ac875c91-285a-420b-9065-50af53ab50d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.625716 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac875c91-285a-420b-9065-50af53ab50d3" (UID: "ac875c91-285a-420b-9065-50af53ab50d3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.634438 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n" (OuterVolumeSpecName: "kube-api-access-rw74n") pod "ac875c91-285a-420b-9065-50af53ab50d3" (UID: "ac875c91-285a-420b-9065-50af53ab50d3"). InnerVolumeSpecName "kube-api-access-rw74n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.729534 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.729686 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzgb\" (UniqueName: \"kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.729833 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.729926 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.730151 4845 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rw74n\" (UniqueName: \"kubernetes.io/projected/ac875c91-285a-420b-9065-50af53ab50d3-kube-api-access-rw74n\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.730174 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac875c91-285a-420b-9065-50af53ab50d3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.730186 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.730196 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac875c91-285a-420b-9065-50af53ab50d3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.831360 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.831421 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.831462 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.831492 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzgb\" (UniqueName: \"kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.832844 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.834022 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.837820 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc 
kubenswrapper[4845]: I0202 10:34:54.848046 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzgb\" (UniqueName: \"kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb\") pod \"route-controller-manager-5ff8755c47-x4g8q\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:54 crc kubenswrapper[4845]: I0202 10:34:54.869142 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.385495 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" event={"ID":"ac875c91-285a-420b-9065-50af53ab50d3","Type":"ContainerDied","Data":"dca4acc312ecd37056dbc4edd5440def5a4b22eb4ea478d220c28a8e7aa4f810"} Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.386253 4845 scope.go:117] "RemoveContainer" containerID="1fcb883e8f41725d44677baa16381c121b0232bd5110fcb8fbbbde75a1dff506" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.385741 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.415437 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"] Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.419641 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jvc49"] Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.724009 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac875c91-285a-420b-9065-50af53ab50d3" path="/var/lib/kubelet/pods/ac875c91-285a-420b-9065-50af53ab50d3/volumes" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.752264 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.843870 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert\") pod \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844016 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles\") pod \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844083 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzkk\" (UniqueName: \"kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk\") pod \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\" 
(UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844127 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config\") pod \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844173 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca\") pod \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\" (UID: \"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd\") " Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844948 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" (UID: "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.844980 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" (UID: "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.845103 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config" (OuterVolumeSpecName: "config") pod "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" (UID: "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.849946 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" (UID: "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.850400 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk" (OuterVolumeSpecName: "kube-api-access-4xzkk") pod "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" (UID: "2fd9461a-3591-4e69-a9fd-2fd7de4d84cd"). InnerVolumeSpecName "kube-api-access-4xzkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.945665 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.945703 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xzkk\" (UniqueName: \"kubernetes.io/projected/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-kube-api-access-4xzkk\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.945717 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.945725 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 02 10:34:55 crc kubenswrapper[4845]: I0202 10:34:55.945733 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:34:56 crc kubenswrapper[4845]: I0202 10:34:56.391674 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" event={"ID":"2fd9461a-3591-4e69-a9fd-2fd7de4d84cd","Type":"ContainerDied","Data":"3b0e906a8843ecd4cf6e3849e6d3f269dfbf6989409b6ad8fb10a3cbc5ff1f7a"} Feb 02 10:34:56 crc kubenswrapper[4845]: I0202 10:34:56.391742 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z9qjh" Feb 02 10:34:56 crc kubenswrapper[4845]: I0202 10:34:56.439208 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:56 crc kubenswrapper[4845]: I0202 10:34:56.441973 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z9qjh"] Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.168362 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.169042 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.169062 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" containerName="controller-manager" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.169569 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" 
containerName="controller-manager" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.170278 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.174846 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.175012 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.175116 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.175370 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.176067 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.177332 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.188467 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.196579 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.279192 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.279277 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.279307 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrkl\" (UniqueName: \"kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.279333 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.279364 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 
10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.380603 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.380699 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.380728 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrkl\" (UniqueName: \"kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.380753 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.380777 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: 
\"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.382747 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.383216 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.384060 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.387790 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.396033 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrkl\" (UniqueName: 
\"kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl\") pod \"controller-manager-7c58f68dfb-lc8q9\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.398524 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.398680 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb5dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nqqx9_openshift-marketplace(fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.400437 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nqqx9" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.461082 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.461222 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55ffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-srnzq_openshift-marketplace(b3624e54-1097-4ab1-bfff-d7e0f721f8f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:34:57 crc kubenswrapper[4845]: E0202 10:34:57.463134 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-srnzq" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" Feb 02 10:34:57 crc 
kubenswrapper[4845]: I0202 10:34:57.490214 4845 scope.go:117] "RemoveContainer" containerID="f9b9887e548d54bb14e6cb1ffbf2477aee082f38d18a932c8121b4adcb811fca" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.499706 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.703493 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.720287 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd9461a-3591-4e69-a9fd-2fd7de4d84cd" path="/var/lib/kubelet/pods/2fd9461a-3591-4e69-a9fd-2fd7de4d84cd/volumes" Feb 02 10:34:57 crc kubenswrapper[4845]: I0202 10:34:57.727564 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pmn9h"] Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.049371 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.402305 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" event={"ID":"84cb7b66-62e7-4012-ab80-7c5e6ba51e35","Type":"ContainerStarted","Data":"b972b425867baab7544057f945353ac29e794d6f3f614a0845cb92b6a6f17282"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.402360 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" event={"ID":"84cb7b66-62e7-4012-ab80-7c5e6ba51e35","Type":"ContainerStarted","Data":"b590edabf427ea4389dbc136eecc0e6927f2ff445e18564798b3169338003976"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.406006 4845 generic.go:334] "Generic (PLEG): container finished" podID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" 
containerID="3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b" exitCode=0 Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.406082 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerDied","Data":"3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.409719 4845 generic.go:334] "Generic (PLEG): container finished" podID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerID="30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992" exitCode=0 Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.409800 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerDied","Data":"30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.416100 4845 generic.go:334] "Generic (PLEG): container finished" podID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerID="d5ee5a1faa591f8599ea0b16afa6af997338eec66ea0aded98078f3d0d23d736" exitCode=0 Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.416227 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerDied","Data":"d5ee5a1faa591f8599ea0b16afa6af997338eec66ea0aded98078f3d0d23d736"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.427651 4845 generic.go:334] "Generic (PLEG): container finished" podID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerID="9de0fd1bbbe5a9cccba7aa175bf6868c3db78e2836e4127189cb452f9838b8c4" exitCode=0 Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.428009 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" 
event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerDied","Data":"9de0fd1bbbe5a9cccba7aa175bf6868c3db78e2836e4127189cb452f9838b8c4"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.430326 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerStarted","Data":"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.437714 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" event={"ID":"fc9b9f1d-2d08-40fa-a147-e57ea489a514","Type":"ContainerStarted","Data":"32b5f5cc2814e9a9f951b6cb1e99fe49b56e048a66aaa5ae91e748c145111771"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.437784 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" event={"ID":"fc9b9f1d-2d08-40fa-a147-e57ea489a514","Type":"ContainerStarted","Data":"f4312bd794368ada2b214c2bbcf464e538e85d6eda329587cacdfdcfaf47a059"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.437879 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.439443 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerStarted","Data":"892e8dde5cc61beb8bab363fa98a836495168d8b933253d6114935636c04fbe1"} Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.445291 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.448282 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" event={"ID":"d9d5f2d7-4523-4243-9439-0b6a0d320578","Type":"ContainerStarted","Data":"1e75a66a1944a3a8fc705151c58070ce86c62b0f80c33f6ac98c68e911d70d1a"} Feb 02 10:34:58 crc kubenswrapper[4845]: E0202 10:34:58.456596 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nqqx9" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" Feb 02 10:34:58 crc kubenswrapper[4845]: E0202 10:34:58.456951 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-srnzq" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" Feb 02 10:34:58 crc kubenswrapper[4845]: I0202 10:34:58.533056 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" podStartSLOduration=11.533036742 podStartE2EDuration="11.533036742s" podCreationTimestamp="2026-02-02 10:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:58.511594431 +0000 UTC m=+179.602995911" watchObservedRunningTime="2026-02-02 10:34:58.533036742 +0000 UTC m=+179.624438192" Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.153717 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.455759 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerID="3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5" exitCode=0 Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.455865 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerDied","Data":"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5"} Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.457853 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pmn9h" event={"ID":"84cb7b66-62e7-4012-ab80-7c5e6ba51e35","Type":"ContainerStarted","Data":"348f0a2b943396c13d038ffcd12f4d08568bdbc6b8be52ba3b708d58bc280eea"} Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.459404 4845 generic.go:334] "Generic (PLEG): container finished" podID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerID="892e8dde5cc61beb8bab363fa98a836495168d8b933253d6114935636c04fbe1" exitCode=0 Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.459446 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerDied","Data":"892e8dde5cc61beb8bab363fa98a836495168d8b933253d6114935636c04fbe1"} Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.460652 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" event={"ID":"d9d5f2d7-4523-4243-9439-0b6a0d320578","Type":"ContainerStarted","Data":"c948c6540acb32b5137bfcbed20acfa6856014e28e025946807e292dedde5f28"} Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.460852 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.465802 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.486363 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" podStartSLOduration=12.486347604 podStartE2EDuration="12.486347604s" podCreationTimestamp="2026-02-02 10:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:59.485210451 +0000 UTC m=+180.576611891" watchObservedRunningTime="2026-02-02 10:34:59.486347604 +0000 UTC m=+180.577749054" Feb 02 10:34:59 crc kubenswrapper[4845]: I0202 10:34:59.525170 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pmn9h" podStartSLOduration=159.525149399 podStartE2EDuration="2m39.525149399s" podCreationTimestamp="2026-02-02 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:34:59.522748569 +0000 UTC m=+180.614150029" watchObservedRunningTime="2026-02-02 10:34:59.525149399 +0000 UTC m=+180.616550849" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.476126 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerStarted","Data":"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0"} Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.481473 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerStarted","Data":"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb"} Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.484387 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerStarted","Data":"2e01706b1e2ad6c4ace7e0a9ee3b107787a005ceeb1be7e17a00e65ea223d90e"} Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.487126 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerStarted","Data":"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c"} Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.490355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerStarted","Data":"692c027aeff58a06f776073818a159fb8a42b2840f14b77a4b11565409dfc3c8"} Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.498947 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7pz84" podStartSLOduration=4.321369343 podStartE2EDuration="33.498923084s" podCreationTimestamp="2026-02-02 10:34:27 +0000 UTC" firstStartedPulling="2026-02-02 10:34:31.014573281 +0000 UTC m=+152.105974731" lastFinishedPulling="2026-02-02 10:35:00.192127022 +0000 UTC m=+181.283528472" observedRunningTime="2026-02-02 10:35:00.494708972 +0000 UTC m=+181.586110432" watchObservedRunningTime="2026-02-02 10:35:00.498923084 +0000 UTC m=+181.590324534" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.521718 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89bwt" podStartSLOduration=2.646582586 podStartE2EDuration="30.521689114s" podCreationTimestamp="2026-02-02 10:34:30 +0000 UTC" firstStartedPulling="2026-02-02 10:34:32.129990782 +0000 UTC m=+153.221392232" lastFinishedPulling="2026-02-02 10:35:00.00509731 +0000 UTC 
m=+181.096498760" observedRunningTime="2026-02-02 10:35:00.515083872 +0000 UTC m=+181.606485322" watchObservedRunningTime="2026-02-02 10:35:00.521689114 +0000 UTC m=+181.613090564" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.535625 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xf5wp" podStartSLOduration=3.549269035 podStartE2EDuration="31.535602807s" podCreationTimestamp="2026-02-02 10:34:29 +0000 UTC" firstStartedPulling="2026-02-02 10:34:32.123391661 +0000 UTC m=+153.214793111" lastFinishedPulling="2026-02-02 10:35:00.109725433 +0000 UTC m=+181.201126883" observedRunningTime="2026-02-02 10:35:00.532297441 +0000 UTC m=+181.623698891" watchObservedRunningTime="2026-02-02 10:35:00.535602807 +0000 UTC m=+181.627004257" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.550547 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-566gk" podStartSLOduration=4.515709147 podStartE2EDuration="33.55052702s" podCreationTimestamp="2026-02-02 10:34:27 +0000 UTC" firstStartedPulling="2026-02-02 10:34:30.992543773 +0000 UTC m=+152.083945233" lastFinishedPulling="2026-02-02 10:35:00.027361616 +0000 UTC m=+181.118763106" observedRunningTime="2026-02-02 10:35:00.548944414 +0000 UTC m=+181.640345884" watchObservedRunningTime="2026-02-02 10:35:00.55052702 +0000 UTC m=+181.641928470" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.576728 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jxk8q" podStartSLOduration=2.455867178 podStartE2EDuration="30.576706589s" podCreationTimestamp="2026-02-02 10:34:30 +0000 UTC" firstStartedPulling="2026-02-02 10:34:32.12476249 +0000 UTC m=+153.216163940" lastFinishedPulling="2026-02-02 10:35:00.245601901 +0000 UTC m=+181.337003351" observedRunningTime="2026-02-02 10:35:00.574949708 +0000 UTC m=+181.666351158" 
watchObservedRunningTime="2026-02-02 10:35:00.576706589 +0000 UTC m=+181.668108049" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.809111 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.809187 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.932417 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:00 crc kubenswrapper[4845]: I0202 10:35:00.932464 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:01 crc kubenswrapper[4845]: I0202 10:35:01.496399 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerStarted","Data":"bbcd1edbd954385c8b6a4defc894e861ef2b8ac66447cad7befe95fddedb8599"} Feb 02 10:35:01 crc kubenswrapper[4845]: I0202 10:35:01.518311 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nb8f9" podStartSLOduration=3.2057943939999998 podStartE2EDuration="31.518297141s" podCreationTimestamp="2026-02-02 10:34:30 +0000 UTC" firstStartedPulling="2026-02-02 10:34:32.132996119 +0000 UTC m=+153.224397569" lastFinishedPulling="2026-02-02 10:35:00.445498866 +0000 UTC m=+181.536900316" observedRunningTime="2026-02-02 10:35:01.516310353 +0000 UTC m=+182.607711813" watchObservedRunningTime="2026-02-02 10:35:01.518297141 +0000 UTC m=+182.609698581" Feb 02 10:35:01 crc kubenswrapper[4845]: I0202 10:35:01.978853 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jxk8q" 
podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="registry-server" probeResult="failure" output=< Feb 02 10:35:01 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 10:35:01 crc kubenswrapper[4845]: > Feb 02 10:35:01 crc kubenswrapper[4845]: I0202 10:35:01.980753 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-89bwt" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="registry-server" probeResult="failure" output=< Feb 02 10:35:01 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 10:35:01 crc kubenswrapper[4845]: > Feb 02 10:35:03 crc kubenswrapper[4845]: I0202 10:35:03.045151 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7qmj" Feb 02 10:35:06 crc kubenswrapper[4845]: I0202 10:35:06.943408 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 10:35:06 crc kubenswrapper[4845]: I0202 10:35:06.943808 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerName="controller-manager" containerID="cri-o://c948c6540acb32b5137bfcbed20acfa6856014e28e025946807e292dedde5f28" gracePeriod=30 Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.049825 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.050043 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" podUID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" containerName="route-controller-manager" 
containerID="cri-o://32b5f5cc2814e9a9f951b6cb1e99fe49b56e048a66aaa5ae91e748c145111771" gracePeriod=30 Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.500992 4845 patch_prober.go:28] interesting pod/controller-manager-7c58f68dfb-lc8q9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.501355 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.904518 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.904571 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:07 crc kubenswrapper[4845]: I0202 10:35:07.953159 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.074336 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.174996 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.175045 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.236936 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.539562 4845 generic.go:334] "Generic (PLEG): container finished" podID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerID="c948c6540acb32b5137bfcbed20acfa6856014e28e025946807e292dedde5f28" exitCode=0 Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.539647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" event={"ID":"d9d5f2d7-4523-4243-9439-0b6a0d320578","Type":"ContainerDied","Data":"c948c6540acb32b5137bfcbed20acfa6856014e28e025946807e292dedde5f28"} Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.541478 4845 generic.go:334] "Generic (PLEG): container finished" podID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" containerID="32b5f5cc2814e9a9f951b6cb1e99fe49b56e048a66aaa5ae91e748c145111771" exitCode=0 Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.542369 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" event={"ID":"fc9b9f1d-2d08-40fa-a147-e57ea489a514","Type":"ContainerDied","Data":"32b5f5cc2814e9a9f951b6cb1e99fe49b56e048a66aaa5ae91e748c145111771"} Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.585789 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.598295 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.746715 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.752097 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.778223 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:08 crc kubenswrapper[4845]: E0202 10:35:08.778468 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerName="controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.778481 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerName="controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: E0202 10:35:08.778494 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" containerName="route-controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.778501 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" containerName="route-controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.778648 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" containerName="controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.778667 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" containerName="route-controller-manager" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.780362 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.783317 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833177 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert\") pod \"d9d5f2d7-4523-4243-9439-0b6a0d320578\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833235 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config\") pod \"d9d5f2d7-4523-4243-9439-0b6a0d320578\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833277 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles\") pod \"d9d5f2d7-4523-4243-9439-0b6a0d320578\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833304 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config\") pod \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833323 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca\") pod \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\" (UID: 
\"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833372 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxzgb\" (UniqueName: \"kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb\") pod \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833397 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrkl\" (UniqueName: \"kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl\") pod \"d9d5f2d7-4523-4243-9439-0b6a0d320578\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833428 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert\") pod \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\" (UID: \"fc9b9f1d-2d08-40fa-a147-e57ea489a514\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.833443 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca\") pod \"d9d5f2d7-4523-4243-9439-0b6a0d320578\" (UID: \"d9d5f2d7-4523-4243-9439-0b6a0d320578\") " Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.834032 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9d5f2d7-4523-4243-9439-0b6a0d320578" (UID: "d9d5f2d7-4523-4243-9439-0b6a0d320578"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.834063 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config" (OuterVolumeSpecName: "config") pod "d9d5f2d7-4523-4243-9439-0b6a0d320578" (UID: "d9d5f2d7-4523-4243-9439-0b6a0d320578"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.834045 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9d5f2d7-4523-4243-9439-0b6a0d320578" (UID: "d9d5f2d7-4523-4243-9439-0b6a0d320578"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.834160 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc9b9f1d-2d08-40fa-a147-e57ea489a514" (UID: "fc9b9f1d-2d08-40fa-a147-e57ea489a514"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.834209 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config" (OuterVolumeSpecName: "config") pod "fc9b9f1d-2d08-40fa-a147-e57ea489a514" (UID: "fc9b9f1d-2d08-40fa-a147-e57ea489a514"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.838753 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9d5f2d7-4523-4243-9439-0b6a0d320578" (UID: "d9d5f2d7-4523-4243-9439-0b6a0d320578"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.838832 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb" (OuterVolumeSpecName: "kube-api-access-zxzgb") pod "fc9b9f1d-2d08-40fa-a147-e57ea489a514" (UID: "fc9b9f1d-2d08-40fa-a147-e57ea489a514"). InnerVolumeSpecName "kube-api-access-zxzgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.843605 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc9b9f1d-2d08-40fa-a147-e57ea489a514" (UID: "fc9b9f1d-2d08-40fa-a147-e57ea489a514"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.847185 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl" (OuterVolumeSpecName: "kube-api-access-smrkl") pod "d9d5f2d7-4523-4243-9439-0b6a0d320578" (UID: "d9d5f2d7-4523-4243-9439-0b6a0d320578"). InnerVolumeSpecName "kube-api-access-smrkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934373 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934424 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czr9\" (UniqueName: \"kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934634 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934706 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934775 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934951 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934976 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9b9f1d-2d08-40fa-a147-e57ea489a514-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.934990 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxzgb\" (UniqueName: \"kubernetes.io/projected/fc9b9f1d-2d08-40fa-a147-e57ea489a514-kube-api-access-zxzgb\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935005 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrkl\" (UniqueName: \"kubernetes.io/projected/d9d5f2d7-4523-4243-9439-0b6a0d320578-kube-api-access-smrkl\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935017 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9b9f1d-2d08-40fa-a147-e57ea489a514-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935029 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935052 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d9d5f2d7-4523-4243-9439-0b6a0d320578-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935064 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:08 crc kubenswrapper[4845]: I0202 10:35:08.935078 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9d5f2d7-4523-4243-9439-0b6a0d320578-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.035672 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.036682 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.036711 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.036759 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.036778 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czr9\" (UniqueName: \"kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.037023 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.037496 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.039929 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc 
kubenswrapper[4845]: I0202 10:35:09.041902 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.052328 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czr9\" (UniqueName: \"kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9\") pod \"controller-manager-785795fb9f-wz4dw\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.107609 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.549369 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" event={"ID":"fc9b9f1d-2d08-40fa-a147-e57ea489a514","Type":"ContainerDied","Data":"f4312bd794368ada2b214c2bbcf464e538e85d6eda329587cacdfdcfaf47a059"} Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.549700 4845 scope.go:117] "RemoveContainer" containerID="32b5f5cc2814e9a9f951b6cb1e99fe49b56e048a66aaa5ae91e748c145111771" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.549911 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.557350 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" event={"ID":"d9d5f2d7-4523-4243-9439-0b6a0d320578","Type":"ContainerDied","Data":"1e75a66a1944a3a8fc705151c58070ce86c62b0f80c33f6ac98c68e911d70d1a"} Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.557431 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.597196 4845 scope.go:117] "RemoveContainer" containerID="c948c6540acb32b5137bfcbed20acfa6856014e28e025946807e292dedde5f28" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.597246 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-566gk"] Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.599994 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.602967 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.605452 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-x4g8q"] Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.617725 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.621652 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-lc8q9"] Feb 02 
10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.731725 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d5f2d7-4523-4243-9439-0b6a0d320578" path="/var/lib/kubelet/pods/d9d5f2d7-4523-4243-9439-0b6a0d320578/volumes" Feb 02 10:35:09 crc kubenswrapper[4845]: I0202 10:35:09.732370 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9b9f1d-2d08-40fa-a147-e57ea489a514" path="/var/lib/kubelet/pods/fc9b9f1d-2d08-40fa-a147-e57ea489a514/volumes" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.094019 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.095067 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.097197 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.097366 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.104762 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.251291 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.251382 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.352462 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.352554 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.352631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.370944 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.410319 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.482318 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xf5wp" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.482677 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xf5wp" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.566399 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" event={"ID":"5462ffc5-3458-4343-90de-625a307d56d0","Type":"ContainerStarted","Data":"e682377b37f912142b25f90537ee5505ca0068713e5d79fc624dba607d356d8e"} Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.566450 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" event={"ID":"5462ffc5-3458-4343-90de-625a307d56d0","Type":"ContainerStarted","Data":"24a79b6e178f4c50f852ca424030cfff6044bf1a543bc4a2ad28acaa9dd3e5e1"} Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.566708 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.568017 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xf5wp" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.571218 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nb8f9" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.571262 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nb8f9" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.576619 4845 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.580427 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-566gk" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="registry-server" containerID="cri-o://599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c" gracePeriod=2 Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.623858 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" podStartSLOduration=4.623843664 podStartE2EDuration="4.623843664s" podCreationTimestamp="2026-02-02 10:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:10.596829141 +0000 UTC m=+191.688230611" watchObservedRunningTime="2026-02-02 10:35:10.623843664 +0000 UTC m=+191.715245114" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.632751 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nb8f9" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.642971 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xf5wp" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.646225 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.854205 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.894646 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.970115 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.977837 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.988482 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:35:10 crc kubenswrapper[4845]: I0202 10:35:10.988723 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7pz84" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="registry-server" containerID="cri-o://e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0" gracePeriod=2 Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.006777 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.066004 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s42p\" (UniqueName: \"kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p\") pod \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.066140 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities\") pod \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.066197 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content\") pod \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\" (UID: \"128f32ab-e2ce-4468-a7e8-bc84aa2bb275\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.067251 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities" (OuterVolumeSpecName: "utilities") pod "128f32ab-e2ce-4468-a7e8-bc84aa2bb275" (UID: "128f32ab-e2ce-4468-a7e8-bc84aa2bb275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.072435 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p" (OuterVolumeSpecName: "kube-api-access-5s42p") pod "128f32ab-e2ce-4468-a7e8-bc84aa2bb275" (UID: "128f32ab-e2ce-4468-a7e8-bc84aa2bb275"). InnerVolumeSpecName "kube-api-access-5s42p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.117463 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "128f32ab-e2ce-4468-a7e8-bc84aa2bb275" (UID: "128f32ab-e2ce-4468-a7e8-bc84aa2bb275"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.167104 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s42p\" (UniqueName: \"kubernetes.io/projected/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-kube-api-access-5s42p\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.167143 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.167155 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f32ab-e2ce-4468-a7e8-bc84aa2bb275-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.192296 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.192763 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="extract-content" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.192774 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="extract-content" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.192785 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="extract-utilities" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.192791 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="extract-utilities" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.192800 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="registry-server" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.192806 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="registry-server" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.192911 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerName="registry-server" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.193326 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.195048 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.195083 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.195048 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.195202 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.195991 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.198669 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.200323 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.268685 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.268819 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxt9f\" (UniqueName: \"kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.268869 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.268924 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.346529 4845 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.369754 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxt9f\" (UniqueName: \"kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.369813 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.369841 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.369909 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.371556 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.371648 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.374514 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.396506 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxt9f\" (UniqueName: \"kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f\") pod \"route-controller-manager-5f8f98956b-pr9zj\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.470668 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities\") pod \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.470723 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nvpsx\" (UniqueName: \"kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx\") pod \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.470773 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content\") pod \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\" (UID: \"149fcd2d-91c2-493a-a1ec-c8675e1901ef\") " Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.471396 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities" (OuterVolumeSpecName: "utilities") pod "149fcd2d-91c2-493a-a1ec-c8675e1901ef" (UID: "149fcd2d-91c2-493a-a1ec-c8675e1901ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.473121 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx" (OuterVolumeSpecName: "kube-api-access-nvpsx") pod "149fcd2d-91c2-493a-a1ec-c8675e1901ef" (UID: "149fcd2d-91c2-493a-a1ec-c8675e1901ef"). InnerVolumeSpecName "kube-api-access-nvpsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.510433 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.521017 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149fcd2d-91c2-493a-a1ec-c8675e1901ef" (UID: "149fcd2d-91c2-493a-a1ec-c8675e1901ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.571797 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.571849 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvpsx\" (UniqueName: \"kubernetes.io/projected/149fcd2d-91c2-493a-a1ec-c8675e1901ef-kube-api-access-nvpsx\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.571868 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149fcd2d-91c2-493a-a1ec-c8675e1901ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.587410 4845 generic.go:334] "Generic (PLEG): container finished" podID="e429cb8e-ac68-4769-8144-bd170eb88425" containerID="67bd6a8667f39a3ae555add49d53eca4ccef6d7bd5ef20691af6315ff909ac04" exitCode=0 Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.587568 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e429cb8e-ac68-4769-8144-bd170eb88425","Type":"ContainerDied","Data":"67bd6a8667f39a3ae555add49d53eca4ccef6d7bd5ef20691af6315ff909ac04"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 
10:35:11.587820 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e429cb8e-ac68-4769-8144-bd170eb88425","Type":"ContainerStarted","Data":"0c7a8ca8d336bdc0a77f701f2799e13b55a207b5da77a1accc1bfdd66da2f277"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.590978 4845 generic.go:334] "Generic (PLEG): container finished" podID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" containerID="599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c" exitCode=0 Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.591043 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerDied","Data":"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.591057 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566gk" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.591069 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566gk" event={"ID":"128f32ab-e2ce-4468-a7e8-bc84aa2bb275","Type":"ContainerDied","Data":"9bcdab2008f0c99412111dda17b5eb250f02325e2f7935a2aadaa0f85ebf0c92"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.591085 4845 scope.go:117] "RemoveContainer" containerID="599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.594053 4845 generic.go:334] "Generic (PLEG): container finished" podID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerID="e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0" exitCode=0 Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.594803 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7pz84" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.598982 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerDied","Data":"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.599041 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pz84" event={"ID":"149fcd2d-91c2-493a-a1ec-c8675e1901ef","Type":"ContainerDied","Data":"9c90348dba10073c8c8427f15a5626e4e8ea2c366d1f13b9219eb720f44735c2"} Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.682053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nb8f9" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.694833 4845 scope.go:117] "RemoveContainer" containerID="3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.696862 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.710590 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7pz84"] Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.746356 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" path="/var/lib/kubelet/pods/149fcd2d-91c2-493a-a1ec-c8675e1901ef/volumes" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.747100 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-566gk"] Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.747142 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-566gk"] Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.775560 4845 scope.go:117] "RemoveContainer" containerID="79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.793088 4845 scope.go:117] "RemoveContainer" containerID="599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.794562 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c\": container with ID starting with 599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c not found: ID does not exist" containerID="599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.794595 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c"} err="failed to get container status \"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c\": rpc error: code = NotFound desc = could not find container \"599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c\": container with ID starting with 599b1bff5deca85c0c8cef3c3345030830c2da69f15c44de71241ffac78ed91c not found: ID does not exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.794636 4845 scope.go:117] "RemoveContainer" containerID="3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.795006 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b\": container with ID starting with 
3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b not found: ID does not exist" containerID="3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.795029 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b"} err="failed to get container status \"3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b\": rpc error: code = NotFound desc = could not find container \"3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b\": container with ID starting with 3ebe20a234144bc7c881d5b70747b9174e6d6c3731b82fc0bc418fa86c82763b not found: ID does not exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.795048 4845 scope.go:117] "RemoveContainer" containerID="79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.795498 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822\": container with ID starting with 79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822 not found: ID does not exist" containerID="79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.795523 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822"} err="failed to get container status \"79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822\": rpc error: code = NotFound desc = could not find container \"79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822\": container with ID starting with 79beba806d11e77da768b00372c982dccf28936f17f8b1718101665cef165822 not found: ID does not 
exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.795542 4845 scope.go:117] "RemoveContainer" containerID="e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.811216 4845 scope.go:117] "RemoveContainer" containerID="30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.827624 4845 scope.go:117] "RemoveContainer" containerID="4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.839751 4845 scope.go:117] "RemoveContainer" containerID="e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.840087 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0\": container with ID starting with e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0 not found: ID does not exist" containerID="e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.840119 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0"} err="failed to get container status \"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0\": rpc error: code = NotFound desc = could not find container \"e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0\": container with ID starting with e870b0d19662252b4f303486b370da820a85f86eaa5dc9f096cd370e72f8bdf0 not found: ID does not exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.840141 4845 scope.go:117] "RemoveContainer" containerID="30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992" Feb 02 10:35:11 crc 
kubenswrapper[4845]: E0202 10:35:11.840483 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992\": container with ID starting with 30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992 not found: ID does not exist" containerID="30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.840503 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992"} err="failed to get container status \"30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992\": rpc error: code = NotFound desc = could not find container \"30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992\": container with ID starting with 30f2480599158ff15839a78a41e3c8ce5042e659014fa22c57bffdb6f66dd992 not found: ID does not exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.840518 4845 scope.go:117] "RemoveContainer" containerID="4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358" Feb 02 10:35:11 crc kubenswrapper[4845]: E0202 10:35:11.840825 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358\": container with ID starting with 4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358 not found: ID does not exist" containerID="4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.840845 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358"} err="failed to get container status 
\"4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358\": rpc error: code = NotFound desc = could not find container \"4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358\": container with ID starting with 4ea4bf24b9001d3db5017954e52445a32b7c139a1e3f3a14c2209147987ee358 not found: ID does not exist" Feb 02 10:35:11 crc kubenswrapper[4845]: I0202 10:35:11.962839 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:11 crc kubenswrapper[4845]: W0202 10:35:11.969789 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d5d1c9b_bf0d_4e5f_b70c_5460f17e53a9.slice/crio-edd76338c4cf9565b4d4c3ff6adff77448c36e6ba5ba8384ccd4a0a7e5b2dd8a WatchSource:0}: Error finding container edd76338c4cf9565b4d4c3ff6adff77448c36e6ba5ba8384ccd4a0a7e5b2dd8a: Status 404 returned error can't find the container with id edd76338c4cf9565b4d4c3ff6adff77448c36e6ba5ba8384ccd4a0a7e5b2dd8a Feb 02 10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.601477 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" event={"ID":"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9","Type":"ContainerStarted","Data":"d824c9ea9648f75ba557aac1840de3f2484a4a72bc9492dd547ef7324c384bec"} Feb 02 10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.601840 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" event={"ID":"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9","Type":"ContainerStarted","Data":"edd76338c4cf9565b4d4c3ff6adff77448c36e6ba5ba8384ccd4a0a7e5b2dd8a"} Feb 02 10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.603077 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 
10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.613633 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.619558 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" podStartSLOduration=5.619484269 podStartE2EDuration="5.619484269s" podCreationTimestamp="2026-02-02 10:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:12.617841751 +0000 UTC m=+193.709243201" watchObservedRunningTime="2026-02-02 10:35:12.619484269 +0000 UTC m=+193.710885709" Feb 02 10:35:12 crc kubenswrapper[4845]: I0202 10:35:12.984319 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.089174 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir\") pod \"e429cb8e-ac68-4769-8144-bd170eb88425\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.089239 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access\") pod \"e429cb8e-ac68-4769-8144-bd170eb88425\" (UID: \"e429cb8e-ac68-4769-8144-bd170eb88425\") " Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.089484 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"e429cb8e-ac68-4769-8144-bd170eb88425" (UID: "e429cb8e-ac68-4769-8144-bd170eb88425"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.093799 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e429cb8e-ac68-4769-8144-bd170eb88425" (UID: "e429cb8e-ac68-4769-8144-bd170eb88425"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.190546 4845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e429cb8e-ac68-4769-8144-bd170eb88425-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.190591 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e429cb8e-ac68-4769-8144-bd170eb88425-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.386674 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"] Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.386953 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89bwt" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="registry-server" containerID="cri-o://2e01706b1e2ad6c4ace7e0a9ee3b107787a005ceeb1be7e17a00e65ea223d90e" gracePeriod=2 Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.627858 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"e429cb8e-ac68-4769-8144-bd170eb88425","Type":"ContainerDied","Data":"0c7a8ca8d336bdc0a77f701f2799e13b55a207b5da77a1accc1bfdd66da2f277"} Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.627910 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c7a8ca8d336bdc0a77f701f2799e13b55a207b5da77a1accc1bfdd66da2f277" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.627960 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.631741 4845 generic.go:334] "Generic (PLEG): container finished" podID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerID="2e01706b1e2ad6c4ace7e0a9ee3b107787a005ceeb1be7e17a00e65ea223d90e" exitCode=0 Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.631813 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerDied","Data":"2e01706b1e2ad6c4ace7e0a9ee3b107787a005ceeb1be7e17a00e65ea223d90e"} Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.724219 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128f32ab-e2ce-4468-a7e8-bc84aa2bb275" path="/var/lib/kubelet/pods/128f32ab-e2ce-4468-a7e8-bc84aa2bb275/volumes" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.815341 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.898045 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content\") pod \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.898108 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7f2m\" (UniqueName: \"kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m\") pod \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.898188 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities\") pod \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\" (UID: \"346c427b-6ed6-4bac-ae1f-ee2400ab6884\") " Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.898948 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities" (OuterVolumeSpecName: "utilities") pod "346c427b-6ed6-4bac-ae1f-ee2400ab6884" (UID: "346c427b-6ed6-4bac-ae1f-ee2400ab6884"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.906125 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m" (OuterVolumeSpecName: "kube-api-access-p7f2m") pod "346c427b-6ed6-4bac-ae1f-ee2400ab6884" (UID: "346c427b-6ed6-4bac-ae1f-ee2400ab6884"). InnerVolumeSpecName "kube-api-access-p7f2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.931486 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "346c427b-6ed6-4bac-ae1f-ee2400ab6884" (UID: "346c427b-6ed6-4bac-ae1f-ee2400ab6884"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.984863 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"] Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.985100 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jxk8q" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="registry-server" containerID="cri-o://2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb" gracePeriod=2 Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.999582 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.999618 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7f2m\" (UniqueName: \"kubernetes.io/projected/346c427b-6ed6-4bac-ae1f-ee2400ab6884-kube-api-access-p7f2m\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:13 crc kubenswrapper[4845]: I0202 10:35:13.999629 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346c427b-6ed6-4bac-ae1f-ee2400ab6884-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.439672 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.505517 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content\") pod \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.505734 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-928mf\" (UniqueName: \"kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf\") pod \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.505825 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities\") pod \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\" (UID: \"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd\") " Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.506882 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities" (OuterVolumeSpecName: "utilities") pod "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" (UID: "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.512086 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf" (OuterVolumeSpecName: "kube-api-access-928mf") pod "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" (UID: "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd"). InnerVolumeSpecName "kube-api-access-928mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.607262 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-928mf\" (UniqueName: \"kubernetes.io/projected/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-kube-api-access-928mf\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.607545 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.638939 4845 generic.go:334] "Generic (PLEG): container finished" podID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerID="2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb" exitCode=0 Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.639017 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerDied","Data":"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb"} Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.639050 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxk8q" event={"ID":"1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd","Type":"ContainerDied","Data":"45e45e9dbbedd64142e2fefb82cba902c15048017b203f1719c5975bd5a5ec46"} Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.639068 4845 scope.go:117] "RemoveContainer" containerID="2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.639116 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxk8q" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.642024 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89bwt" event={"ID":"346c427b-6ed6-4bac-ae1f-ee2400ab6884","Type":"ContainerDied","Data":"6eeeecc6f7635724fe5c5de24d8b0a5978478aedd321e7117013fb83d5e17bed"} Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.642142 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89bwt" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.651388 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerStarted","Data":"5a24fa9c118523cf61e0979a46080c1c45d6df89bfac951d244d917947874a3c"} Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.658477 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" (UID: "1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.659554 4845 scope.go:117] "RemoveContainer" containerID="3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.662749 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerStarted","Data":"b1fd42c06508a5f7cc77e759a154a4b98880587d65c3430e3b9c376f3441f3e5"} Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.695561 4845 scope.go:117] "RemoveContainer" containerID="f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.708624 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.734061 4845 scope.go:117] "RemoveContainer" containerID="2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb" Feb 02 10:35:14 crc kubenswrapper[4845]: E0202 10:35:14.734630 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb\": container with ID starting with 2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb not found: ID does not exist" containerID="2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.734684 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb"} err="failed to get container status \"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb\": rpc error: 
code = NotFound desc = could not find container \"2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb\": container with ID starting with 2cec9c5daa4e3c700a159a8ca737ab0f93ab3b9ee5094f6f92b629949b0337fb not found: ID does not exist" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.734714 4845 scope.go:117] "RemoveContainer" containerID="3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5" Feb 02 10:35:14 crc kubenswrapper[4845]: E0202 10:35:14.735009 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5\": container with ID starting with 3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5 not found: ID does not exist" containerID="3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.735094 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5"} err="failed to get container status \"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5\": rpc error: code = NotFound desc = could not find container \"3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5\": container with ID starting with 3e41e91374d41dc9396251c1a3103698cf4930fd702dd17f19478e9e82fee7f5 not found: ID does not exist" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.735177 4845 scope.go:117] "RemoveContainer" containerID="f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175" Feb 02 10:35:14 crc kubenswrapper[4845]: E0202 10:35:14.735975 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175\": container with ID starting with 
f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175 not found: ID does not exist" containerID="f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.736083 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175"} err="failed to get container status \"f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175\": rpc error: code = NotFound desc = could not find container \"f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175\": container with ID starting with f3f753184491872d9d76899a0dbcf4380da09a3a8edc986931923c603f6e9175 not found: ID does not exist" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.736159 4845 scope.go:117] "RemoveContainer" containerID="2e01706b1e2ad6c4ace7e0a9ee3b107787a005ceeb1be7e17a00e65ea223d90e" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.744005 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"] Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.749249 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89bwt"] Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.750632 4845 scope.go:117] "RemoveContainer" containerID="d5ee5a1faa591f8599ea0b16afa6af997338eec66ea0aded98078f3d0d23d736" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.769518 4845 scope.go:117] "RemoveContainer" containerID="ac8e195b0cbc3f127a71603eaf254c9928ef5832347572bb602ee62efcc4964c" Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.964004 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"] Feb 02 10:35:14 crc kubenswrapper[4845]: I0202 10:35:14.969179 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jxk8q"] Feb 02 
10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.675931 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerDied","Data":"b1fd42c06508a5f7cc77e759a154a4b98880587d65c3430e3b9c376f3441f3e5"} Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.675931 4845 generic.go:334] "Generic (PLEG): container finished" podID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerID="b1fd42c06508a5f7cc77e759a154a4b98880587d65c3430e3b9c376f3441f3e5" exitCode=0 Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.688542 4845 generic.go:334] "Generic (PLEG): container finished" podID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerID="5a24fa9c118523cf61e0979a46080c1c45d6df89bfac951d244d917947874a3c" exitCode=0 Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.688579 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerDied","Data":"5a24fa9c118523cf61e0979a46080c1c45d6df89bfac951d244d917947874a3c"} Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.719455 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" path="/var/lib/kubelet/pods/1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd/volumes" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.720294 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" path="/var/lib/kubelet/pods/346c427b-6ed6-4bac-ae1f-ee2400ab6884/volumes" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897211 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897446 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" 
containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897458 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897471 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897477 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897485 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897491 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897497 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897502 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="extract-utilities" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897511 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897516 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897522 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e429cb8e-ac68-4769-8144-bd170eb88425" 
containerName="pruner" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897528 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e429cb8e-ac68-4769-8144-bd170eb88425" containerName="pruner" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897536 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897543 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897550 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897555 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="extract-content" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897561 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897567 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: E0202 10:35:15.897578 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897584 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897670 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="346c427b-6ed6-4bac-ae1f-ee2400ab6884" containerName="registry-server" 
Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897683 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="e429cb8e-ac68-4769-8144-bd170eb88425" containerName="pruner" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897690 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1579ee3d-0fc7-456f-a78a-eb18aa7bf2bd" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.897696 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="149fcd2d-91c2-493a-a1ec-c8675e1901ef" containerName="registry-server" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.898048 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.903251 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.903972 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.904839 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.923686 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.923772 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock\") pod 
\"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:15 crc kubenswrapper[4845]: I0202 10:35:15.923830 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.025450 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.025516 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.025548 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.025625 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 
10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.025645 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.043152 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access\") pod \"installer-9-crc\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.226742 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.241549 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.241620 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.695195 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerStarted","Data":"68d3140ebd20281e9eddc75449b750e9027bcd03b8ff2f48c1cab9686d75572d"} Feb 02 10:35:16 crc 
kubenswrapper[4845]: I0202 10:35:16.699072 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerStarted","Data":"5769d4596805c8a147c91069eb2109528eac8714acdda17a418b761289524bf3"} Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.717878 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.724167 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-srnzq" podStartSLOduration=3.354625771 podStartE2EDuration="49.724151002s" podCreationTimestamp="2026-02-02 10:34:27 +0000 UTC" firstStartedPulling="2026-02-02 10:34:29.815042252 +0000 UTC m=+150.906443702" lastFinishedPulling="2026-02-02 10:35:16.184567483 +0000 UTC m=+197.275968933" observedRunningTime="2026-02-02 10:35:16.714138962 +0000 UTC m=+197.805540412" watchObservedRunningTime="2026-02-02 10:35:16.724151002 +0000 UTC m=+197.815552452" Feb 02 10:35:16 crc kubenswrapper[4845]: I0202 10:35:16.744096 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nqqx9" podStartSLOduration=4.700364839 podStartE2EDuration="49.74407326s" podCreationTimestamp="2026-02-02 10:34:27 +0000 UTC" firstStartedPulling="2026-02-02 10:34:31.006523488 +0000 UTC m=+152.097924938" lastFinishedPulling="2026-02-02 10:35:16.050231909 +0000 UTC m=+197.141633359" observedRunningTime="2026-02-02 10:35:16.74199138 +0000 UTC m=+197.833392840" watchObservedRunningTime="2026-02-02 10:35:16.74407326 +0000 UTC m=+197.835474710" Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.516068 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.516131 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.705763 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21f3600b-d9f0-4d8e-8a5a-2b03161164b4","Type":"ContainerStarted","Data":"706922acec942089a8fd3b03c826b07b1a0b641cf0a3415a88c290fcbe0bd620"} Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.705831 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21f3600b-d9f0-4d8e-8a5a-2b03161164b4","Type":"ContainerStarted","Data":"ae045aee676ac5379e234ac23abc2cda69a140b263c32736f6d478088a0d6ce2"} Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.724312 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.724296029 podStartE2EDuration="2.724296029s" podCreationTimestamp="2026-02-02 10:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:17.723328061 +0000 UTC m=+198.814729501" watchObservedRunningTime="2026-02-02 10:35:17.724296029 +0000 UTC m=+198.815697479" Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.739170 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:35:17 crc kubenswrapper[4845]: I0202 10:35:17.739464 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:35:18 crc kubenswrapper[4845]: I0202 10:35:18.575779 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:35:18 crc kubenswrapper[4845]: I0202 10:35:18.778404 4845 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/certified-operators-nqqx9" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="registry-server" probeResult="failure" output=< Feb 02 10:35:18 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 10:35:18 crc kubenswrapper[4845]: > Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.176022 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" containerName="oauth-openshift" containerID="cri-o://93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e" gracePeriod=15 Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.667661 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.702466 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd"] Feb 02 10:35:24 crc kubenswrapper[4845]: E0202 10:35:24.702696 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" containerName="oauth-openshift" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.702708 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" containerName="oauth-openshift" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.702815 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" containerName="oauth-openshift" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.703202 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.717452 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd"] Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739451 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739554 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739590 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739623 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739656 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739688 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739723 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739757 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739809 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739851 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739917 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.739981 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740011 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6x5c\" (UniqueName: \"kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740045 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle\") pod \"98d47741-7063-487f-a38b-b9c398f3e07e\" (UID: \"98d47741-7063-487f-a38b-b9c398f3e07e\") " Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740259 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-dir\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740304 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvqw\" (UniqueName: \"kubernetes.io/projected/34a4ebb3-154b-43c2-8566-8a638df7ecdf-kube-api-access-mgvqw\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740333 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740365 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740402 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: 
\"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740443 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740489 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740525 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740564 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 
10:35:24.740592 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740644 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740674 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740696 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.740714 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.742230 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.742251 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.742386 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.742393 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.742536 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.750477 4845 generic.go:334] "Generic (PLEG): container finished" podID="98d47741-7063-487f-a38b-b9c398f3e07e" containerID="93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e" exitCode=0 Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.750547 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" event={"ID":"98d47741-7063-487f-a38b-b9c398f3e07e","Type":"ContainerDied","Data":"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e"} Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.750560 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.750584 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w989s" event={"ID":"98d47741-7063-487f-a38b-b9c398f3e07e","Type":"ContainerDied","Data":"7e70d90a9fcf67e25e1301d6daeb2cdf5956e3a601c854abda0627c2816b60da"} Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.750604 4845 scope.go:117] "RemoveContainer" containerID="93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.751265 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.752000 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.752268 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c" (OuterVolumeSpecName: "kube-api-access-c6x5c") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). 
InnerVolumeSpecName "kube-api-access-c6x5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.752583 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.752900 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.753969 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.754152 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.754619 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.760324 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "98d47741-7063-487f-a38b-b9c398f3e07e" (UID: "98d47741-7063-487f-a38b-b9c398f3e07e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.794034 4845 scope.go:117] "RemoveContainer" containerID="93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e" Feb 02 10:35:24 crc kubenswrapper[4845]: E0202 10:35:24.794542 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e\": container with ID starting with 93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e not found: ID does not exist" containerID="93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.794589 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e"} err="failed to get container status \"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e\": rpc error: code = NotFound desc = could not find container \"93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e\": container with ID starting with 93e37fe117c0972dc087595f3fe9a911fd4435d1bda815a2b2a543564e02392e not found: ID does not exist" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.841809 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvqw\" (UniqueName: \"kubernetes.io/projected/34a4ebb3-154b-43c2-8566-8a638df7ecdf-kube-api-access-mgvqw\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.841862 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.841951 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.841985 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842012 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " 
pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842058 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842085 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842104 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842139 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842166 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842201 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842228 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842251 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-dir\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842290 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842300 4845 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842311 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842322 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842333 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842343 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842351 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842360 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc 
kubenswrapper[4845]: I0202 10:35:24.842370 4845 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98d47741-7063-487f-a38b-b9c398f3e07e-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842379 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842388 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842398 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842407 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6x5c\" (UniqueName: \"kubernetes.io/projected/98d47741-7063-487f-a38b-b9c398f3e07e-kube-api-access-c6x5c\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842415 4845 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d47741-7063-487f-a38b-b9c398f3e07e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.842450 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-dir\") pod 
\"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.844338 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.844348 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.845032 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.845128 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.845586 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.846006 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.846173 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.847007 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.847121 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " 
pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.848119 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.849011 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.849277 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34a4ebb3-154b-43c2-8566-8a638df7ecdf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:24 crc kubenswrapper[4845]: I0202 10:35:24.857613 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvqw\" (UniqueName: \"kubernetes.io/projected/34a4ebb3-154b-43c2-8566-8a638df7ecdf-kube-api-access-mgvqw\") pod \"oauth-openshift-6447dfb5d9-lnbfd\" (UID: \"34a4ebb3-154b-43c2-8566-8a638df7ecdf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.027506 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.098259 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.100179 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w989s"] Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.483086 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd"] Feb 02 10:35:25 crc kubenswrapper[4845]: W0202 10:35:25.491182 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a4ebb3_154b_43c2_8566_8a638df7ecdf.slice/crio-6982373ab744699d1914de4c3993d91b5e1f9b3638bba103bc665cbb6df59906 WatchSource:0}: Error finding container 6982373ab744699d1914de4c3993d91b5e1f9b3638bba103bc665cbb6df59906: Status 404 returned error can't find the container with id 6982373ab744699d1914de4c3993d91b5e1f9b3638bba103bc665cbb6df59906 Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.724354 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d47741-7063-487f-a38b-b9c398f3e07e" path="/var/lib/kubelet/pods/98d47741-7063-487f-a38b-b9c398f3e07e/volumes" Feb 02 10:35:25 crc kubenswrapper[4845]: I0202 10:35:25.758346 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" event={"ID":"34a4ebb3-154b-43c2-8566-8a638df7ecdf","Type":"ContainerStarted","Data":"6982373ab744699d1914de4c3993d91b5e1f9b3638bba103bc665cbb6df59906"} Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.764821 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" 
event={"ID":"34a4ebb3-154b-43c2-8566-8a638df7ecdf","Type":"ContainerStarted","Data":"5d3f37ade19ae8ca89f23cd7a194059f11dd56d8ec3ec70eaee679848e91d69d"} Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.765053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.773068 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.795935 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6447dfb5d9-lnbfd" podStartSLOduration=27.795909643 podStartE2EDuration="27.795909643s" podCreationTimestamp="2026-02-02 10:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:26.786560821 +0000 UTC m=+207.877962311" watchObservedRunningTime="2026-02-02 10:35:26.795909643 +0000 UTC m=+207.887311093" Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.985679 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:26 crc kubenswrapper[4845]: I0202 10:35:26.985940 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" podUID="5462ffc5-3458-4343-90de-625a307d56d0" containerName="controller-manager" containerID="cri-o://e682377b37f912142b25f90537ee5505ca0068713e5d79fc624dba607d356d8e" gracePeriod=30 Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.017751 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.018032 4845 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" podUID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" containerName="route-controller-manager" containerID="cri-o://d824c9ea9648f75ba557aac1840de3f2484a4a72bc9492dd547ef7324c384bec" gracePeriod=30 Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.581549 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.772523 4845 generic.go:334] "Generic (PLEG): container finished" podID="5462ffc5-3458-4343-90de-625a307d56d0" containerID="e682377b37f912142b25f90537ee5505ca0068713e5d79fc624dba607d356d8e" exitCode=0 Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.772617 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" event={"ID":"5462ffc5-3458-4343-90de-625a307d56d0","Type":"ContainerDied","Data":"e682377b37f912142b25f90537ee5505ca0068713e5d79fc624dba607d356d8e"} Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.774334 4845 generic.go:334] "Generic (PLEG): container finished" podID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" containerID="d824c9ea9648f75ba557aac1840de3f2484a4a72bc9492dd547ef7324c384bec" exitCode=0 Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.774417 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" event={"ID":"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9","Type":"ContainerDied","Data":"d824c9ea9648f75ba557aac1840de3f2484a4a72bc9492dd547ef7324c384bec"} Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.789077 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:35:27 crc kubenswrapper[4845]: I0202 10:35:27.827657 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.139565 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.183281 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:28 crc kubenswrapper[4845]: E0202 10:35:28.183518 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" containerName="route-controller-manager" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.183532 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" containerName="route-controller-manager" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.183649 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" containerName="route-controller-manager" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.184107 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.194243 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.202899 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config\") pod \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.202987 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert\") pod \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.203021 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxt9f\" (UniqueName: \"kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f\") pod \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.203072 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca\") pod \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\" (UID: \"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.204372 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" 
(UID: "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.204628 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config" (OuterVolumeSpecName: "config") pod "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" (UID: "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.208879 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" (UID: "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.211106 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f" (OuterVolumeSpecName: "kube-api-access-pxt9f") pod "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" (UID: "9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9"). InnerVolumeSpecName "kube-api-access-pxt9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.249265 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.304429 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config\") pod \"5462ffc5-3458-4343-90de-625a307d56d0\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.304827 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert\") pod \"5462ffc5-3458-4343-90de-625a307d56d0\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.305350 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config" (OuterVolumeSpecName: "config") pod "5462ffc5-3458-4343-90de-625a307d56d0" (UID: "5462ffc5-3458-4343-90de-625a307d56d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.305528 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czr9\" (UniqueName: \"kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9\") pod \"5462ffc5-3458-4343-90de-625a307d56d0\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.305707 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca\") pod \"5462ffc5-3458-4343-90de-625a307d56d0\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.305854 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles\") pod \"5462ffc5-3458-4343-90de-625a307d56d0\" (UID: \"5462ffc5-3458-4343-90de-625a307d56d0\") " Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306233 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "5462ffc5-3458-4343-90de-625a307d56d0" (UID: "5462ffc5-3458-4343-90de-625a307d56d0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306255 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306479 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmlm\" (UniqueName: \"kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306596 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306637 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5462ffc5-3458-4343-90de-625a307d56d0" (UID: "5462ffc5-3458-4343-90de-625a307d56d0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306751 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.306969 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307063 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307143 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307294 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307386 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5462ffc5-3458-4343-90de-625a307d56d0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307512 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307603 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxt9f\" (UniqueName: \"kubernetes.io/projected/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9-kube-api-access-pxt9f\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.307913 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5462ffc5-3458-4343-90de-625a307d56d0" (UID: "5462ffc5-3458-4343-90de-625a307d56d0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.308150 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9" (OuterVolumeSpecName: "kube-api-access-2czr9") pod "5462ffc5-3458-4343-90de-625a307d56d0" (UID: "5462ffc5-3458-4343-90de-625a307d56d0"). InnerVolumeSpecName "kube-api-access-2czr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409271 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409347 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409368 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmlm\" (UniqueName: \"kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409397 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409433 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2czr9\" (UniqueName: 
\"kubernetes.io/projected/5462ffc5-3458-4343-90de-625a307d56d0-kube-api-access-2czr9\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.409445 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5462ffc5-3458-4343-90de-625a307d56d0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.410979 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.411460 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.416107 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.439021 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmlm\" (UniqueName: \"kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm\") pod \"route-controller-manager-6f88969cc8-2vg4q\" (UID: 
\"c21c318c-0429-4775-ac72-4556534d415e\") " pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.504530 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.782824 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.782810 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785795fb9f-wz4dw" event={"ID":"5462ffc5-3458-4343-90de-625a307d56d0","Type":"ContainerDied","Data":"24a79b6e178f4c50f852ca424030cfff6044bf1a543bc4a2ad28acaa9dd3e5e1"} Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.782958 4845 scope.go:117] "RemoveContainer" containerID="e682377b37f912142b25f90537ee5505ca0068713e5d79fc624dba607d356d8e" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.785792 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.785851 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj" event={"ID":"9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9","Type":"ContainerDied","Data":"edd76338c4cf9565b4d4c3ff6adff77448c36e6ba5ba8384ccd4a0a7e5b2dd8a"} Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.804576 4845 scope.go:117] "RemoveContainer" containerID="d824c9ea9648f75ba557aac1840de3f2484a4a72bc9492dd547ef7324c384bec" Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.821961 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.832209 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-785795fb9f-wz4dw"] Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.836295 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.840653 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f8f98956b-pr9zj"] Feb 02 10:35:28 crc kubenswrapper[4845]: I0202 10:35:28.985017 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:28 crc kubenswrapper[4845]: W0202 10:35:28.996592 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc21c318c_0429_4775_ac72_4556534d415e.slice/crio-75c70fe87f49a1adc8e4a51d12bf78e0c4d3ac5c8bc0b14cda22b9b93f914921 WatchSource:0}: Error finding container 
75c70fe87f49a1adc8e4a51d12bf78e0c4d3ac5c8bc0b14cda22b9b93f914921: Status 404 returned error can't find the container with id 75c70fe87f49a1adc8e4a51d12bf78e0c4d3ac5c8bc0b14cda22b9b93f914921 Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.718910 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5462ffc5-3458-4343-90de-625a307d56d0" path="/var/lib/kubelet/pods/5462ffc5-3458-4343-90de-625a307d56d0/volumes" Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.719411 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9" path="/var/lib/kubelet/pods/9d5d1c9b-bf0d-4e5f-b70c-5460f17e53a9/volumes" Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.802016 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" event={"ID":"c21c318c-0429-4775-ac72-4556534d415e","Type":"ContainerStarted","Data":"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7"} Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.802055 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" event={"ID":"c21c318c-0429-4775-ac72-4556534d415e","Type":"ContainerStarted","Data":"75c70fe87f49a1adc8e4a51d12bf78e0c4d3ac5c8bc0b14cda22b9b93f914921"} Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.802358 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.807948 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:29 crc kubenswrapper[4845]: I0202 10:35:29.824916 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" podStartSLOduration=2.824875891 podStartE2EDuration="2.824875891s" podCreationTimestamp="2026-02-02 10:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:29.822060999 +0000 UTC m=+210.913462459" watchObservedRunningTime="2026-02-02 10:35:29.824875891 +0000 UTC m=+210.916277341" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.209180 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:30 crc kubenswrapper[4845]: E0202 10:35:30.210064 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462ffc5-3458-4343-90de-625a307d56d0" containerName="controller-manager" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.210228 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462ffc5-3458-4343-90de-625a307d56d0" containerName="controller-manager" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.210533 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="5462ffc5-3458-4343-90de-625a307d56d0" containerName="controller-manager" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.211287 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.216820 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.217015 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.218400 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.218991 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.219492 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.219916 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.232610 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.234977 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.337420 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " 
pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.338133 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.338493 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chfp\" (UniqueName: \"kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.338822 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.339141 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.441087 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.442118 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chfp\" (UniqueName: \"kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.444237 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.444442 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.444562 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.443510 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.446316 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.446828 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.455205 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.472577 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chfp\" (UniqueName: \"kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp\") pod \"controller-manager-598bc44594-t4nt7\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 
10:35:30 crc kubenswrapper[4845]: I0202 10:35:30.548158 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:31 crc kubenswrapper[4845]: I0202 10:35:31.049400 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:31 crc kubenswrapper[4845]: W0202 10:35:31.054868 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70fc0f2d_6a96_4333_b777_9149d48db9a9.slice/crio-8da502ed26de0c24190d879fb4dd0874ff65fc05bf9ee8f199f7bf5b2e694da3 WatchSource:0}: Error finding container 8da502ed26de0c24190d879fb4dd0874ff65fc05bf9ee8f199f7bf5b2e694da3: Status 404 returned error can't find the container with id 8da502ed26de0c24190d879fb4dd0874ff65fc05bf9ee8f199f7bf5b2e694da3 Feb 02 10:35:31 crc kubenswrapper[4845]: I0202 10:35:31.819553 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" event={"ID":"70fc0f2d-6a96-4333-b777-9149d48db9a9","Type":"ContainerStarted","Data":"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb"} Feb 02 10:35:31 crc kubenswrapper[4845]: I0202 10:35:31.820082 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" event={"ID":"70fc0f2d-6a96-4333-b777-9149d48db9a9","Type":"ContainerStarted","Data":"8da502ed26de0c24190d879fb4dd0874ff65fc05bf9ee8f199f7bf5b2e694da3"} Feb 02 10:35:31 crc kubenswrapper[4845]: I0202 10:35:31.842637 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" podStartSLOduration=4.842622746 podStartE2EDuration="4.842622746s" podCreationTimestamp="2026-02-02 10:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:31.842220434 +0000 UTC m=+212.933621884" watchObservedRunningTime="2026-02-02 10:35:31.842622746 +0000 UTC m=+212.934024186" Feb 02 10:35:32 crc kubenswrapper[4845]: I0202 10:35:32.827526 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:32 crc kubenswrapper[4845]: I0202 10:35:32.833469 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.237602 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.238321 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.238390 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.239087 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.239177 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428" gracePeriod=600 Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.919371 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428" exitCode=0 Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.919471 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428"} Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.919987 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3"} Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.957926 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:46 crc kubenswrapper[4845]: I0202 10:35:46.958161 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" podUID="70fc0f2d-6a96-4333-b777-9149d48db9a9" containerName="controller-manager" containerID="cri-o://cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb" gracePeriod=30 Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.049399 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.049650 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" podUID="c21c318c-0429-4775-ac72-4556534d415e" containerName="route-controller-manager" containerID="cri-o://648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7" gracePeriod=30 Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.602374 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.606975 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.684874 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles\") pod \"70fc0f2d-6a96-4333-b777-9149d48db9a9\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.684972 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config\") pod \"c21c318c-0429-4775-ac72-4556534d415e\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685022 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7chfp\" (UniqueName: \"kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp\") pod 
\"70fc0f2d-6a96-4333-b777-9149d48db9a9\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685096 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert\") pod \"70fc0f2d-6a96-4333-b777-9149d48db9a9\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685132 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca\") pod \"70fc0f2d-6a96-4333-b777-9149d48db9a9\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685227 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config\") pod \"70fc0f2d-6a96-4333-b777-9149d48db9a9\" (UID: \"70fc0f2d-6a96-4333-b777-9149d48db9a9\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685851 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70fc0f2d-6a96-4333-b777-9149d48db9a9" (UID: "70fc0f2d-6a96-4333-b777-9149d48db9a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685910 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config" (OuterVolumeSpecName: "config") pod "c21c318c-0429-4775-ac72-4556534d415e" (UID: "c21c318c-0429-4775-ac72-4556534d415e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.685938 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config" (OuterVolumeSpecName: "config") pod "70fc0f2d-6a96-4333-b777-9149d48db9a9" (UID: "70fc0f2d-6a96-4333-b777-9149d48db9a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.686060 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "70fc0f2d-6a96-4333-b777-9149d48db9a9" (UID: "70fc0f2d-6a96-4333-b777-9149d48db9a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.686192 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca\") pod \"c21c318c-0429-4775-ac72-4556534d415e\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.686743 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c21c318c-0429-4775-ac72-4556534d415e" (UID: "c21c318c-0429-4775-ac72-4556534d415e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.686850 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmlm\" (UniqueName: \"kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm\") pod \"c21c318c-0429-4775-ac72-4556534d415e\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687368 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert\") pod \"c21c318c-0429-4775-ac72-4556534d415e\" (UID: \"c21c318c-0429-4775-ac72-4556534d415e\") " Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687717 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687748 4845 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687768 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21c318c-0429-4775-ac72-4556534d415e-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687785 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.687802 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/70fc0f2d-6a96-4333-b777-9149d48db9a9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.691072 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm" (OuterVolumeSpecName: "kube-api-access-rwmlm") pod "c21c318c-0429-4775-ac72-4556534d415e" (UID: "c21c318c-0429-4775-ac72-4556534d415e"). InnerVolumeSpecName "kube-api-access-rwmlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.691156 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c21c318c-0429-4775-ac72-4556534d415e" (UID: "c21c318c-0429-4775-ac72-4556534d415e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.698633 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp" (OuterVolumeSpecName: "kube-api-access-7chfp") pod "70fc0f2d-6a96-4333-b777-9149d48db9a9" (UID: "70fc0f2d-6a96-4333-b777-9149d48db9a9"). InnerVolumeSpecName "kube-api-access-7chfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.698990 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70fc0f2d-6a96-4333-b777-9149d48db9a9" (UID: "70fc0f2d-6a96-4333-b777-9149d48db9a9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.788955 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmlm\" (UniqueName: \"kubernetes.io/projected/c21c318c-0429-4775-ac72-4556534d415e-kube-api-access-rwmlm\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.789782 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21c318c-0429-4775-ac72-4556534d415e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.789911 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7chfp\" (UniqueName: \"kubernetes.io/projected/70fc0f2d-6a96-4333-b777-9149d48db9a9-kube-api-access-7chfp\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.789931 4845 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fc0f2d-6a96-4333-b777-9149d48db9a9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.925633 4845 generic.go:334] "Generic (PLEG): container finished" podID="c21c318c-0429-4775-ac72-4556534d415e" containerID="648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7" exitCode=0 Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.925719 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.925730 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" event={"ID":"c21c318c-0429-4775-ac72-4556534d415e","Type":"ContainerDied","Data":"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7"} Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.926154 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q" event={"ID":"c21c318c-0429-4775-ac72-4556534d415e","Type":"ContainerDied","Data":"75c70fe87f49a1adc8e4a51d12bf78e0c4d3ac5c8bc0b14cda22b9b93f914921"} Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.926177 4845 scope.go:117] "RemoveContainer" containerID="648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.928880 4845 generic.go:334] "Generic (PLEG): container finished" podID="70fc0f2d-6a96-4333-b777-9149d48db9a9" containerID="cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb" exitCode=0 Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.928955 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" event={"ID":"70fc0f2d-6a96-4333-b777-9149d48db9a9","Type":"ContainerDied","Data":"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb"} Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.929001 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" event={"ID":"70fc0f2d-6a96-4333-b777-9149d48db9a9","Type":"ContainerDied","Data":"8da502ed26de0c24190d879fb4dd0874ff65fc05bf9ee8f199f7bf5b2e694da3"} Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.929009 4845 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598bc44594-t4nt7" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.947108 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.950868 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f88969cc8-2vg4q"] Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.951037 4845 scope.go:117] "RemoveContainer" containerID="648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7" Feb 02 10:35:47 crc kubenswrapper[4845]: E0202 10:35:47.952739 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7\": container with ID starting with 648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7 not found: ID does not exist" containerID="648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.952774 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7"} err="failed to get container status \"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7\": rpc error: code = NotFound desc = could not find container \"648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7\": container with ID starting with 648b0f1b7867bb5fe6a652841b964331d54fca10f74d2f426e3d89d581b28ca7 not found: ID does not exist" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.952796 4845 scope.go:117] "RemoveContainer" containerID="cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb" Feb 02 10:35:47 crc kubenswrapper[4845]: 
I0202 10:35:47.959709 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.963510 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598bc44594-t4nt7"] Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.971955 4845 scope.go:117] "RemoveContainer" containerID="cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb" Feb 02 10:35:47 crc kubenswrapper[4845]: E0202 10:35:47.972488 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb\": container with ID starting with cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb not found: ID does not exist" containerID="cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb" Feb 02 10:35:47 crc kubenswrapper[4845]: I0202 10:35:47.972537 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb"} err="failed to get container status \"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb\": rpc error: code = NotFound desc = could not find container \"cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb\": container with ID starting with cbd30dac003bbb0eb946cba982acab9780e5a61b47e1d8afac620fceb739c7fb not found: ID does not exist" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.234339 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m"] Feb 02 10:35:48 crc kubenswrapper[4845]: E0202 10:35:48.235161 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fc0f2d-6a96-4333-b777-9149d48db9a9" containerName="controller-manager" Feb 02 10:35:48 
crc kubenswrapper[4845]: I0202 10:35:48.235181 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fc0f2d-6a96-4333-b777-9149d48db9a9" containerName="controller-manager" Feb 02 10:35:48 crc kubenswrapper[4845]: E0202 10:35:48.235210 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21c318c-0429-4775-ac72-4556534d415e" containerName="route-controller-manager" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.235219 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21c318c-0429-4775-ac72-4556534d415e" containerName="route-controller-manager" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.235355 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21c318c-0429-4775-ac72-4556534d415e" containerName="route-controller-manager" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.235378 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fc0f2d-6a96-4333-b777-9149d48db9a9" containerName="controller-manager" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.236991 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.239362 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.240494 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.240680 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.241339 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.242110 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.242539 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.246161 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f44fddcf4-ffjng"] Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.246938 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.251658 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.252555 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.253784 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.254023 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.254272 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.255625 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.259303 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.260265 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m"] Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.266563 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f44fddcf4-ffjng"] Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295624 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-client-ca\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295670 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-config\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295715 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-serving-cert\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295851 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk5q\" (UniqueName: \"kubernetes.io/projected/51c80373-5133-4cd9-9289-cc50013b0875-kube-api-access-crk5q\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295902 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtz49\" (UniqueName: \"kubernetes.io/projected/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-kube-api-access-mtz49\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " 
pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.295984 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-client-ca\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.296037 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c80373-5133-4cd9-9289-cc50013b0875-serving-cert\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.296079 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-proxy-ca-bundles\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.296126 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-config\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397551 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-client-ca\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397594 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c80373-5133-4cd9-9289-cc50013b0875-serving-cert\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397622 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-proxy-ca-bundles\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397652 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-config\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397682 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-client-ca\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397702 
4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-config\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397736 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-serving-cert\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397775 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crk5q\" (UniqueName: \"kubernetes.io/projected/51c80373-5133-4cd9-9289-cc50013b0875-kube-api-access-crk5q\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.397797 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtz49\" (UniqueName: \"kubernetes.io/projected/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-kube-api-access-mtz49\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.398630 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-client-ca\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: 
\"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.398689 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-client-ca\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.399397 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-proxy-ca-bundles\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.399513 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-config\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.400073 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c80373-5133-4cd9-9289-cc50013b0875-config\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.402757 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/51c80373-5133-4cd9-9289-cc50013b0875-serving-cert\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.405827 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-serving-cert\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.420149 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtz49\" (UniqueName: \"kubernetes.io/projected/4f7fcdd8-93f8-4b27-b281-b94eaf0ce813-kube-api-access-mtz49\") pod \"controller-manager-6f44fddcf4-ffjng\" (UID: \"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813\") " pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.420722 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crk5q\" (UniqueName: \"kubernetes.io/projected/51c80373-5133-4cd9-9289-cc50013b0875-kube-api-access-crk5q\") pod \"route-controller-manager-764c9d5fcd-lwb8m\" (UID: \"51c80373-5133-4cd9-9289-cc50013b0875\") " pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.592472 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:48 crc kubenswrapper[4845]: I0202 10:35:48.601163 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.005957 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f44fddcf4-ffjng"] Feb 02 10:35:49 crc kubenswrapper[4845]: W0202 10:35:49.010277 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7fcdd8_93f8_4b27_b281_b94eaf0ce813.slice/crio-26db9b88bbf58d42f8e069a10eaa4bb6f6b1eb2c315d1b6aa45867eec78a545d WatchSource:0}: Error finding container 26db9b88bbf58d42f8e069a10eaa4bb6f6b1eb2c315d1b6aa45867eec78a545d: Status 404 returned error can't find the container with id 26db9b88bbf58d42f8e069a10eaa4bb6f6b1eb2c315d1b6aa45867eec78a545d Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.052255 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m"] Feb 02 10:35:49 crc kubenswrapper[4845]: W0202 10:35:49.058028 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c80373_5133_4cd9_9289_cc50013b0875.slice/crio-08cd64fa4a1ed19829fc653eb7a69b66bf52a7d455fb5ee1c4669305df19c9d5 WatchSource:0}: Error finding container 08cd64fa4a1ed19829fc653eb7a69b66bf52a7d455fb5ee1c4669305df19c9d5: Status 404 returned error can't find the container with id 08cd64fa4a1ed19829fc653eb7a69b66bf52a7d455fb5ee1c4669305df19c9d5 Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.720359 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70fc0f2d-6a96-4333-b777-9149d48db9a9" path="/var/lib/kubelet/pods/70fc0f2d-6a96-4333-b777-9149d48db9a9/volumes" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.721154 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21c318c-0429-4775-ac72-4556534d415e" 
path="/var/lib/kubelet/pods/c21c318c-0429-4775-ac72-4556534d415e/volumes" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.942037 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" event={"ID":"51c80373-5133-4cd9-9289-cc50013b0875","Type":"ContainerStarted","Data":"350548c497af7726933e5093f670e11e6ffee57506d1ac026195d5e29832b064"} Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.942359 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.942369 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" event={"ID":"51c80373-5133-4cd9-9289-cc50013b0875","Type":"ContainerStarted","Data":"08cd64fa4a1ed19829fc653eb7a69b66bf52a7d455fb5ee1c4669305df19c9d5"} Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.943611 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" event={"ID":"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813","Type":"ContainerStarted","Data":"2e46816a2a756cc92ce9e3d35496bd32dccef94d25b9bdd19a168af6da2f39ab"} Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.943653 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" event={"ID":"4f7fcdd8-93f8-4b27-b281-b94eaf0ce813","Type":"ContainerStarted","Data":"26db9b88bbf58d42f8e069a10eaa4bb6f6b1eb2c315d1b6aa45867eec78a545d"} Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.947148 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.963877 4845 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-764c9d5fcd-lwb8m" podStartSLOduration=2.963856164 podStartE2EDuration="2.963856164s" podCreationTimestamp="2026-02-02 10:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:49.960221768 +0000 UTC m=+231.051623228" watchObservedRunningTime="2026-02-02 10:35:49.963856164 +0000 UTC m=+231.055257624" Feb 02 10:35:49 crc kubenswrapper[4845]: I0202 10:35:49.988194 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" podStartSLOduration=3.988171294 podStartE2EDuration="3.988171294s" podCreationTimestamp="2026-02-02 10:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:35:49.984653191 +0000 UTC m=+231.076054641" watchObservedRunningTime="2026-02-02 10:35:49.988171294 +0000 UTC m=+231.079572744" Feb 02 10:35:50 crc kubenswrapper[4845]: I0202 10:35:50.949632 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:50 crc kubenswrapper[4845]: I0202 10:35:50.953795 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f44fddcf4-ffjng" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.944592 4845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.945720 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.946051 4845 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.946560 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e" gracePeriod=15 Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.946590 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904" gracePeriod=15 Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.946675 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f" gracePeriod=15 Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.946779 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404" gracePeriod=15 Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.948687 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b" gracePeriod=15 Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.948841 4845 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949222 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949246 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949265 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949278 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949297 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949310 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949327 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949338 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 
10:35:54.949357 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949369 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949387 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949399 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949413 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949424 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949599 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949625 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949641 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949658 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949677 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949696 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:35:54 crc kubenswrapper[4845]: E0202 10:35:54.949918 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.949936 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:54 crc kubenswrapper[4845]: I0202 10:35:54.950125 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.035825 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.035909 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.035971 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.036012 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.036048 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.036095 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.036166 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 
10:35:55.036204 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137410 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137815 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137851 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137550 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc 
kubenswrapper[4845]: I0202 10:35:55.137874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137992 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138034 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138086 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137993 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138122 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138145 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138173 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.137931 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138189 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138211 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.138298 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.987077 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.988575 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.989323 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904" exitCode=0 Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.989351 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b" exitCode=0 Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.989360 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404" exitCode=0 Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.989367 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f" exitCode=2 Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.989456 4845 scope.go:117] "RemoveContainer" containerID="5ff3296a372879cb87edb5cfc38562c90d60096e7990fafa14c9eb8a50fba1c5" Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.991177 4845 generic.go:334] "Generic (PLEG): container finished" podID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" containerID="706922acec942089a8fd3b03c826b07b1a0b641cf0a3415a88c290fcbe0bd620" exitCode=0 Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.991207 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21f3600b-d9f0-4d8e-8a5a-2b03161164b4","Type":"ContainerDied","Data":"706922acec942089a8fd3b03c826b07b1a0b641cf0a3415a88c290fcbe0bd620"} Feb 02 10:35:55 crc kubenswrapper[4845]: I0202 10:35:55.991804 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.000402 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.334517 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.335821 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.336563 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.337114 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.469725 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.469967 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470180 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470260 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470370 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470515 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470949 4845 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.470983 4845 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.471001 4845 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.503735 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.504846 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.505423 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.571445 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access\") pod \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.571690 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock\") pod \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.571799 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir\") pod \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\" (UID: \"21f3600b-d9f0-4d8e-8a5a-2b03161164b4\") " Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.572082 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21f3600b-d9f0-4d8e-8a5a-2b03161164b4" (UID: "21f3600b-d9f0-4d8e-8a5a-2b03161164b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.572111 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock" (OuterVolumeSpecName: "var-lock") pod "21f3600b-d9f0-4d8e-8a5a-2b03161164b4" (UID: "21f3600b-d9f0-4d8e-8a5a-2b03161164b4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.577075 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21f3600b-d9f0-4d8e-8a5a-2b03161164b4" (UID: "21f3600b-d9f0-4d8e-8a5a-2b03161164b4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.673767 4845 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.673811 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.673828 4845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21f3600b-d9f0-4d8e-8a5a-2b03161164b4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:35:57 crc kubenswrapper[4845]: I0202 10:35:57.719569 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.010131 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21f3600b-d9f0-4d8e-8a5a-2b03161164b4","Type":"ContainerDied","Data":"ae045aee676ac5379e234ac23abc2cda69a140b263c32736f6d478088a0d6ce2"} Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.010183 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.010185 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae045aee676ac5379e234ac23abc2cda69a140b263c32736f6d478088a0d6ce2" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.013164 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.013946 4845 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e" exitCode=0 Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.014017 4845 scope.go:117] "RemoveContainer" containerID="72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.014043 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.014766 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.015153 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.015492 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.015757 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.017185 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection 
refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.017591 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.032577 4845 scope.go:117] "RemoveContainer" containerID="a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.050487 4845 scope.go:117] "RemoveContainer" containerID="58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.065269 4845 scope.go:117] "RemoveContainer" containerID="079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.081309 4845 scope.go:117] "RemoveContainer" containerID="ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.098851 4845 scope.go:117] "RemoveContainer" containerID="07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.117860 4845 scope.go:117] "RemoveContainer" containerID="72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 10:35:58.118880 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\": container with ID starting with 72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904 not found: ID does not exist" containerID="72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.118925 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904"} err="failed to get container status \"72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\": rpc error: code = NotFound desc = could not find container \"72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904\": container with ID starting with 72600d42e64c46b10166b8eaa27c0cf3472aca69343fd952de501981b2c34904 not found: ID does not exist" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.118946 4845 scope.go:117] "RemoveContainer" containerID="a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 10:35:58.119324 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\": container with ID starting with a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b not found: ID does not exist" containerID="a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.119347 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b"} err="failed to get container status \"a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\": rpc error: code = NotFound desc = could not find container \"a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b\": container with ID starting with a6d578bf4f9dc17f5b6a4273bab3aa60907f3144c5990669117db64ad4426b5b not found: ID does not exist" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.119360 4845 scope.go:117] "RemoveContainer" containerID="58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 
10:35:58.119616 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\": container with ID starting with 58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404 not found: ID does not exist" containerID="58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.119670 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404"} err="failed to get container status \"58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\": rpc error: code = NotFound desc = could not find container \"58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404\": container with ID starting with 58191ca6c2bdd6b2996155235018e669f275c0e09dcb8e463c493ed2cfe53404 not found: ID does not exist" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.119686 4845 scope.go:117] "RemoveContainer" containerID="079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 10:35:58.121727 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\": container with ID starting with 079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f not found: ID does not exist" containerID="079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.121746 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f"} err="failed to get container status \"079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\": rpc 
error: code = NotFound desc = could not find container \"079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f\": container with ID starting with 079eded7443742e375dcd20de8d1b6b4e60a4d5de4289a483f009f0247894c8f not found: ID does not exist" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.121780 4845 scope.go:117] "RemoveContainer" containerID="ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 10:35:58.122101 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\": container with ID starting with ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e not found: ID does not exist" containerID="ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.122118 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e"} err="failed to get container status \"ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\": rpc error: code = NotFound desc = could not find container \"ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e\": container with ID starting with ee7ef152f60c567afaf821367923b75eff8761f1c286a2106de41b795d37902e not found: ID does not exist" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.122130 4845 scope.go:117] "RemoveContainer" containerID="07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850" Feb 02 10:35:58 crc kubenswrapper[4845]: E0202 10:35:58.122315 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\": container with ID starting with 
07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850 not found: ID does not exist" containerID="07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850" Feb 02 10:35:58 crc kubenswrapper[4845]: I0202 10:35:58.122334 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850"} err="failed to get container status \"07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\": rpc error: code = NotFound desc = could not find container \"07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850\": container with ID starting with 07753d42b1f45141fa85e04e5ef07cd89d67c405b95a52b8f64aa0fa3785a850 not found: ID does not exist" Feb 02 10:35:59 crc kubenswrapper[4845]: I0202 10:35:59.714838 4845 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:59 crc kubenswrapper[4845]: I0202 10:35:59.715713 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:35:59 crc kubenswrapper[4845]: E0202 10:35:59.984528 4845 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:35:59 crc kubenswrapper[4845]: I0202 10:35:59.985654 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.025037 4845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18906798ea8f0bd0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:36:00.024538064 +0000 UTC m=+241.115939514,LastTimestamp:2026-02-02 10:36:00.024538064 +0000 UTC m=+241.115939514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:36:00 crc kubenswrapper[4845]: I0202 10:36:00.047374 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cee4ad5f52f15862d1d4518fd97b395ab4fe572eb89976c35570a9b62fe0884c"} Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.423986 4845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.425612 4845 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.426009 4845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.426268 4845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.426493 4845 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:00 crc kubenswrapper[4845]: I0202 10:36:00.426526 4845 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.426742 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Feb 02 10:36:00 crc kubenswrapper[4845]: E0202 10:36:00.627900 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Feb 02 10:36:01 crc 
kubenswrapper[4845]: E0202 10:36:01.028567 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Feb 02 10:36:01 crc kubenswrapper[4845]: I0202 10:36:01.054953 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446"} Feb 02 10:36:01 crc kubenswrapper[4845]: I0202 10:36:01.055710 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:01 crc kubenswrapper[4845]: E0202 10:36:01.055745 4845 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:36:01 crc kubenswrapper[4845]: E0202 10:36:01.830389 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Feb 02 10:36:02 crc kubenswrapper[4845]: E0202 10:36:02.064578 4845 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.224:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:36:03 crc kubenswrapper[4845]: E0202 10:36:03.431463 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="3.2s" Feb 02 10:36:05 crc kubenswrapper[4845]: E0202 10:36:05.872049 4845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18906798ea8f0bd0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:36:00.024538064 +0000 UTC m=+241.115939514,LastTimestamp:2026-02-02 10:36:00.024538064 +0000 UTC m=+241.115939514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:36:06 crc kubenswrapper[4845]: E0202 10:36:06.632365 4845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="6.4s" Feb 02 10:36:08 crc kubenswrapper[4845]: I0202 10:36:08.711689 
4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:08 crc kubenswrapper[4845]: I0202 10:36:08.713981 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:08 crc kubenswrapper[4845]: I0202 10:36:08.729941 4845 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:08 crc kubenswrapper[4845]: I0202 10:36:08.729980 4845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:08 crc kubenswrapper[4845]: E0202 10:36:08.730496 4845 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:08 crc kubenswrapper[4845]: I0202 10:36:08.731151 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:08 crc kubenswrapper[4845]: W0202 10:36:08.752559 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-16759ce193aa3d4b5fec2e2cfc9415b9df22f822bca3fc667eeab2415abf27c4 WatchSource:0}: Error finding container 16759ce193aa3d4b5fec2e2cfc9415b9df22f822bca3fc667eeab2415abf27c4: Status 404 returned error can't find the container with id 16759ce193aa3d4b5fec2e2cfc9415b9df22f822bca3fc667eeab2415abf27c4 Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.107758 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.108013 4845 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf" exitCode=1 Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.108062 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf"} Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.108538 4845 scope.go:117] "RemoveContainer" containerID="60395e5b5c5225bf27cc1401c84b57081cb08f8b76c5f8ea9f0e472c2d37abdf" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.109215 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection 
refused" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.109574 4845 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.110762 4845 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3752fb9bb94368a7ec84e8e8ccd1f9ab95621ec384cbde95acb1893f614bdde2" exitCode=0 Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.110782 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3752fb9bb94368a7ec84e8e8ccd1f9ab95621ec384cbde95acb1893f614bdde2"} Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.110798 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"16759ce193aa3d4b5fec2e2cfc9415b9df22f822bca3fc667eeab2415abf27c4"} Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.110993 4845 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.111010 4845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.111418 4845 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:09 crc kubenswrapper[4845]: E0202 10:36:09.111542 4845 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.111616 4845 status_manager.go:851] "Failed to get status for pod" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Feb 02 10:36:09 crc kubenswrapper[4845]: I0202 10:36:09.230940 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:36:10 crc kubenswrapper[4845]: I0202 10:36:10.121564 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:36:10 crc kubenswrapper[4845]: I0202 10:36:10.122049 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06cca458fbbab7cb6f7429a24188d88f05f7680c5bd0a1399a150c84889692ba"} Feb 02 10:36:10 crc kubenswrapper[4845]: I0202 10:36:10.125549 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31650c369a34fb4b7c02e1dfba30243032457402224de23d3fb3de47ea29a1e2"} Feb 02 10:36:10 crc kubenswrapper[4845]: I0202 10:36:10.125589 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81e6f74786af2bcab9994da100052bfe310fb2dceb31841baae11dd472280515"} Feb 02 10:36:10 crc kubenswrapper[4845]: I0202 10:36:10.125603 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e04b720f7623a4f32efd79a47e802542478ff82153780713d07d829d51384186"} Feb 02 10:36:11 crc kubenswrapper[4845]: I0202 10:36:11.134816 4845 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:11 crc kubenswrapper[4845]: I0202 10:36:11.135081 4845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:11 crc kubenswrapper[4845]: I0202 10:36:11.135002 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a1a2e0959b946db6c2a17d60afb9d27f63f74d2f9576d5e12807882710f7978c"} Feb 02 10:36:11 crc kubenswrapper[4845]: I0202 10:36:11.135125 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7537b27493878f8bf00411d64cbadcfe74b4d897814ea7d5a7b3c16b1ab6692c"} Feb 02 10:36:11 crc kubenswrapper[4845]: I0202 10:36:11.135149 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:13 crc kubenswrapper[4845]: I0202 10:36:13.731961 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:13 crc kubenswrapper[4845]: I0202 10:36:13.732443 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:13 crc kubenswrapper[4845]: I0202 10:36:13.741377 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:14 crc kubenswrapper[4845]: I0202 10:36:14.178464 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:36:16 crc kubenswrapper[4845]: I0202 10:36:16.154166 4845 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:16 crc kubenswrapper[4845]: I0202 10:36:16.168168 4845 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d115014f-e828-4339-b181-ebb8a9f6d3cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e04b720f7623a4f32efd79a47e802542478ff82153780713d07d829d51384186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31650c369a34fb4b7c02e1dfba30243032457402224de23d3fb3de47ea29a1e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e6f74786af2bcab9994da100052bfe310fb2dceb31841baae11dd472280515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7537b27493878f8bf00411d64cbadcfe74b4d897814ea7d5a7b3c16b1ab6692c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:36:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a2e0959b946db6c2a17d60afb9d27f63f74d2f9576d5e12807882710f7978c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:36:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Feb 02 10:36:17 crc kubenswrapper[4845]: I0202 10:36:17.178600 4845 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:17 crc kubenswrapper[4845]: I0202 10:36:17.178645 4845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:17 crc kubenswrapper[4845]: I0202 10:36:17.187016 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:17 crc kubenswrapper[4845]: I0202 10:36:17.191265 4845 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="414c7d44-51dd-4046-b309-84e94419d37b" Feb 02 10:36:18 crc kubenswrapper[4845]: I0202 10:36:18.185267 4845 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:18 crc kubenswrapper[4845]: I0202 10:36:18.185316 4845 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d115014f-e828-4339-b181-ebb8a9f6d3cf" Feb 02 10:36:19 crc kubenswrapper[4845]: I0202 10:36:19.230931 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:36:19 crc kubenswrapper[4845]: I0202 10:36:19.239233 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:36:19 crc kubenswrapper[4845]: I0202 10:36:19.743028 4845 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="414c7d44-51dd-4046-b309-84e94419d37b" Feb 02 10:36:20 crc kubenswrapper[4845]: I0202 10:36:20.206187 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:36:25 crc kubenswrapper[4845]: I0202 10:36:25.611427 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:36:26 crc kubenswrapper[4845]: I0202 10:36:26.247834 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:36:26 crc kubenswrapper[4845]: I0202 10:36:26.467345 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 10:36:26 crc kubenswrapper[4845]: I0202 10:36:26.665067 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 10:36:26 crc kubenswrapper[4845]: I0202 10:36:26.792730 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 10:36:26 crc kubenswrapper[4845]: I0202 10:36:26.974274 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.053183 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.085541 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:36:27 
crc kubenswrapper[4845]: I0202 10:36:27.225097 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.348802 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.782448 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.948020 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:36:27 crc kubenswrapper[4845]: I0202 10:36:27.991941 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.136020 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.373614 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.382953 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.522362 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.566050 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.586300 4845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.825571 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.832934 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:36:28 crc kubenswrapper[4845]: I0202 10:36:28.929797 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.205766 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.393683 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.569441 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.678170 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.682657 4845 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.902472 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.922386 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:36:29 crc kubenswrapper[4845]: I0202 10:36:29.941353 4845 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.024146 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.048863 4845 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.232403 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.307961 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.325480 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.350698 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.399042 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.468378 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.495003 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.497995 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:36:30 crc 
kubenswrapper[4845]: I0202 10:36:30.534361 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.606433 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.655237 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.717803 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.783871 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.845003 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:36:30 crc kubenswrapper[4845]: I0202 10:36:30.914003 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.139576 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.241323 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.273465 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.343503 4845 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.357257 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.471446 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.673977 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.798043 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.832800 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.866963 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.875372 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.882170 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.882564 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.890304 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 10:36:31 
crc kubenswrapper[4845]: I0202 10:36:31.984118 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:36:31 crc kubenswrapper[4845]: I0202 10:36:31.993691 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.001702 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.035184 4845 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.134924 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.136542 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.185109 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.295958 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.370411 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.416337 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.437442 4845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.499856 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.569129 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.654692 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.658712 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.773530 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.789359 4845 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.800466 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.800565 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.812088 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.815496 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.834683 4845 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.834660456 podStartE2EDuration="16.834660456s" podCreationTimestamp="2026-02-02 10:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:36:32.831842364 +0000 UTC m=+273.923243824" watchObservedRunningTime="2026-02-02 10:36:32.834660456 +0000 UTC m=+273.926061916" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.944661 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 10:36:32 crc kubenswrapper[4845]: I0202 10:36:32.979068 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.030270 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.066130 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.086797 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.123292 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.163150 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.167956 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 10:36:33 crc 
kubenswrapper[4845]: I0202 10:36:33.172182 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.180403 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.334595 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.367315 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.381753 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.456168 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.459911 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.512344 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.605756 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.657211 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.674496 4845 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.704586 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.741601 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.851856 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.894388 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.913250 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 10:36:33 crc kubenswrapper[4845]: I0202 10:36:33.955642 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.210092 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.273251 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.325880 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.419657 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 
10:36:34.451934 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.527952 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.626177 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.668645 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.672410 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.749530 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:36:34 crc kubenswrapper[4845]: I0202 10:36:34.989666 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.002850 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.020780 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.026770 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.065500 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 
10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.070848 4845 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.144062 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.146016 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.207727 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.269616 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.384726 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.444374 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.444441 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.444564 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.479179 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.482650 4845 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.500081 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.518098 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.565246 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.582120 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.584588 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.585073 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.602620 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.617386 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.755764 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.800203 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.870359 4845 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 10:36:35 crc kubenswrapper[4845]: I0202 10:36:35.894648 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.041432 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.244461 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.335707 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.371557 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.385650 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.470933 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.533880 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.539722 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.580019 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.634728 4845 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.640346 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.649374 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.659649 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.695092 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.713697 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.762320 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.763133 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.828429 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.909588 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:36:36 crc kubenswrapper[4845]: I0202 10:36:36.999254 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 
10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.025972 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.074076 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.085532 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.129740 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.160541 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.313851 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.320744 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.355249 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.356482 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.377305 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.386507 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 
02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.387215 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.391355 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.397147 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.410965 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.470686 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.558222 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.711934 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.722892 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.729683 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 10:36:37 crc kubenswrapper[4845]: I0202 10:36:37.947213 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.028270 4845 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.059263 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.219281 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.243442 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.249148 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.267547 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.292054 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.301021 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.366158 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.500747 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.512779 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 10:36:38 crc 
kubenswrapper[4845]: I0202 10:36:38.703802 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.712621 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.737015 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.752841 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.808789 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.938099 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.976161 4845 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 10:36:38 crc kubenswrapper[4845]: I0202 10:36:38.976468 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446" gracePeriod=5
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.012630 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.098663 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.115396 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.146454 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.211165 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.313011 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.460613 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.674068 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.725534 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.746085 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.772081 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 10:36:39 crc kubenswrapper[4845]: I0202 10:36:39.803579 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.014878 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.156364 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.160229 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.269930 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.274968 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.326041 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.333795 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.340196 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.351015 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.395618 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.512268 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.529129 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.610419 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.645998 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.702434 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.715733 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.952396 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 10:36:40 crc kubenswrapper[4845]: I0202 10:36:40.990849 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.026450 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.059455 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.206363 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.395614 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.532861 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.585081 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.695094 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.747640 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.762572 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqqx9"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.762902 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nqqx9" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="registry-server" containerID="cri-o://5769d4596805c8a147c91069eb2109528eac8714acdda17a418b761289524bf3" gracePeriod=30
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.775894 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-srnzq"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.776130 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-srnzq" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="registry-server" containerID="cri-o://68d3140ebd20281e9eddc75449b750e9027bcd03b8ff2f48c1cab9686d75572d" gracePeriod=30
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.792042 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.792323 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" containerID="cri-o://6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6" gracePeriod=30
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.805931 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.806205 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xf5wp" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="registry-server" containerID="cri-o://692c027aeff58a06f776073818a159fb8a42b2840f14b77a4b11565409dfc3c8" gracePeriod=30
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.816310 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.817213 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nb8f9" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="registry-server" containerID="cri-o://bbcd1edbd954385c8b6a4defc894e861ef2b8ac66447cad7befe95fddedb8599" gracePeriod=30
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.855765 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ms22s"]
Feb 02 10:36:41 crc kubenswrapper[4845]: E0202 10:36:41.855993 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.856005 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 02 10:36:41 crc kubenswrapper[4845]: E0202 10:36:41.856021 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" containerName="installer"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.856028 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" containerName="installer"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.856115 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.856129 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f3600b-d9f0-4d8e-8a5a-2b03161164b4" containerName="installer"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.856472 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.867205 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ms22s"]
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.902058 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.927798 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:36:41 crc kubenswrapper[4845]: I0202 10:36:41.935451 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.030878 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.045002 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgdc\" (UniqueName: \"kubernetes.io/projected/9fc452cb-0731-44f6-aae8-bad730786d8a-kube-api-access-vdgdc\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.045094 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.045169 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.064710 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.078071 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.150796 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.151161 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.151190 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgdc\" (UniqueName: \"kubernetes.io/projected/9fc452cb-0731-44f6-aae8-bad730786d8a-kube-api-access-vdgdc\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.153701 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.156393 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9fc452cb-0731-44f6-aae8-bad730786d8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.168386 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgdc\" (UniqueName: \"kubernetes.io/projected/9fc452cb-0731-44f6-aae8-bad730786d8a-kube-api-access-vdgdc\") pod \"marketplace-operator-79b997595-ms22s\" (UID: \"9fc452cb-0731-44f6-aae8-bad730786d8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.208479 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.235454 4845 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.287256 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.353873 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca\") pod \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.353968 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk7lz\" (UniqueName: \"kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz\") pod \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.354073 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics\") pod \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\" (UID: \"d2ddc114-bfc4-444f-aeb3-8d43d95bec09\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.353970 4845 generic.go:334] "Generic (PLEG): container finished" podID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerID="692c027aeff58a06f776073818a159fb8a42b2840f14b77a4b11565409dfc3c8" exitCode=0
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.354259 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerDied","Data":"692c027aeff58a06f776073818a159fb8a42b2840f14b77a4b11565409dfc3c8"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.354549 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d2ddc114-bfc4-444f-aeb3-8d43d95bec09" (UID: "d2ddc114-bfc4-444f-aeb3-8d43d95bec09"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.364717 4845 generic.go:334] "Generic (PLEG): container finished" podID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerID="5769d4596805c8a147c91069eb2109528eac8714acdda17a418b761289524bf3" exitCode=0
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.365086 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerDied","Data":"5769d4596805c8a147c91069eb2109528eac8714acdda17a418b761289524bf3"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.368101 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d2ddc114-bfc4-444f-aeb3-8d43d95bec09" (UID: "d2ddc114-bfc4-444f-aeb3-8d43d95bec09"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.370539 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz" (OuterVolumeSpecName: "kube-api-access-mk7lz") pod "d2ddc114-bfc4-444f-aeb3-8d43d95bec09" (UID: "d2ddc114-bfc4-444f-aeb3-8d43d95bec09"). InnerVolumeSpecName "kube-api-access-mk7lz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.380267 4845 generic.go:334] "Generic (PLEG): container finished" podID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerID="6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6" exitCode=0
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.380357 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" event={"ID":"d2ddc114-bfc4-444f-aeb3-8d43d95bec09","Type":"ContainerDied","Data":"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.380385 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j" event={"ID":"d2ddc114-bfc4-444f-aeb3-8d43d95bec09","Type":"ContainerDied","Data":"a4725b8556aef79f1893fb492aa385b87083299f9e257b7881345d1bcb5c2f73"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.380400 4845 scope.go:117] "RemoveContainer" containerID="6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.380507 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hr44j"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.396277 4845 generic.go:334] "Generic (PLEG): container finished" podID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerID="68d3140ebd20281e9eddc75449b750e9027bcd03b8ff2f48c1cab9686d75572d" exitCode=0
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.396363 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerDied","Data":"68d3140ebd20281e9eddc75449b750e9027bcd03b8ff2f48c1cab9686d75572d"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.405188 4845 generic.go:334] "Generic (PLEG): container finished" podID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerID="bbcd1edbd954385c8b6a4defc894e861ef2b8ac66447cad7befe95fddedb8599" exitCode=0
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.405229 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerDied","Data":"bbcd1edbd954385c8b6a4defc894e861ef2b8ac66447cad7befe95fddedb8599"}
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.416009 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"]
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.420632 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hr44j"]
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.420772 4845 scope.go:117] "RemoveContainer" containerID="6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"
Feb 02 10:36:42 crc kubenswrapper[4845]: E0202 10:36:42.423313 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6\": container with ID starting with 6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6 not found: ID does not exist" containerID="6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.423349 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6"} err="failed to get container status \"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6\": rpc error: code = NotFound desc = could not find container \"6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6\": container with ID starting with 6ddec9957fc622ddd7645307da14e15fc1f9260e65077226b0be1226c3e80fb6 not found: ID does not exist"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.455833 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk7lz\" (UniqueName: \"kubernetes.io/projected/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-kube-api-access-mk7lz\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.455864 4845 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.455877 4845 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2ddc114-bfc4-444f-aeb3-8d43d95bec09-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.467398 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srnzq"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.481099 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqqx9"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.484053 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf5wp"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.526372 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nb8f9"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.587178 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.639204 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670516 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content\") pod \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670592 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8ql7\" (UniqueName: \"kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7\") pod \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670614 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content\") pod \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670633 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb5dl\" (UniqueName: \"kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl\") pod \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670665 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities\") pod \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\" (UID: \"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670695 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities\") pod \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670710 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities\") pod \"66911d31-17db-4d9e-b0c2-9cb699fc0778\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670725 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content\") pod \"66911d31-17db-4d9e-b0c2-9cb699fc0778\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670742 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content\") pod \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670777 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities\") pod \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\" (UID: \"3ceca4a8-b0dd-47cc-a1fe-818e984af772\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670801 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg7pg\" (UniqueName: \"kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg\") pod \"66911d31-17db-4d9e-b0c2-9cb699fc0778\" (UID: \"66911d31-17db-4d9e-b0c2-9cb699fc0778\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.670819 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55ffd\" (UniqueName: \"kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd\") pod \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\" (UID: \"b3624e54-1097-4ab1-bfff-d7e0f721f8f0\") "
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.671378 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities" (OuterVolumeSpecName: "utilities") pod "66911d31-17db-4d9e-b0c2-9cb699fc0778" (UID: "66911d31-17db-4d9e-b0c2-9cb699fc0778"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.671921 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities" (OuterVolumeSpecName: "utilities") pod "b3624e54-1097-4ab1-bfff-d7e0f721f8f0" (UID: "b3624e54-1097-4ab1-bfff-d7e0f721f8f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.672027 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities" (OuterVolumeSpecName: "utilities") pod "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" (UID: "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.672534 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities" (OuterVolumeSpecName: "utilities") pod "3ceca4a8-b0dd-47cc-a1fe-818e984af772" (UID: "3ceca4a8-b0dd-47cc-a1fe-818e984af772"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.674274 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7" (OuterVolumeSpecName: "kube-api-access-c8ql7") pod "3ceca4a8-b0dd-47cc-a1fe-818e984af772" (UID: "3ceca4a8-b0dd-47cc-a1fe-818e984af772"). InnerVolumeSpecName "kube-api-access-c8ql7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.674309 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg" (OuterVolumeSpecName: "kube-api-access-fg7pg") pod "66911d31-17db-4d9e-b0c2-9cb699fc0778" (UID: "66911d31-17db-4d9e-b0c2-9cb699fc0778"). InnerVolumeSpecName "kube-api-access-fg7pg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.674841 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl" (OuterVolumeSpecName: "kube-api-access-mb5dl") pod "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" (UID: "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06"). InnerVolumeSpecName "kube-api-access-mb5dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.688361 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd" (OuterVolumeSpecName: "kube-api-access-55ffd") pod "b3624e54-1097-4ab1-bfff-d7e0f721f8f0" (UID: "b3624e54-1097-4ab1-bfff-d7e0f721f8f0"). InnerVolumeSpecName "kube-api-access-55ffd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.705015 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ceca4a8-b0dd-47cc-a1fe-818e984af772" (UID: "3ceca4a8-b0dd-47cc-a1fe-818e984af772"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.728956 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" (UID: "fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.734745 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.734920 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3624e54-1097-4ab1-bfff-d7e0f721f8f0" (UID: "b3624e54-1097-4ab1-bfff-d7e0f721f8f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.762518 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ms22s"]
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772533 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772590 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg7pg\" (UniqueName: \"kubernetes.io/projected/66911d31-17db-4d9e-b0c2-9cb699fc0778-kube-api-access-fg7pg\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772604 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55ffd\" (UniqueName: \"kubernetes.io/projected/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-kube-api-access-55ffd\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772616 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772628 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8ql7\" (UniqueName: \"kubernetes.io/projected/3ceca4a8-b0dd-47cc-a1fe-818e984af772-kube-api-access-c8ql7\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772638 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ceca4a8-b0dd-47cc-a1fe-818e984af772-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772649 4845 reconciler_common.go:293] "Volume detached for
volume \"kube-api-access-mb5dl\" (UniqueName: \"kubernetes.io/projected/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-kube-api-access-mb5dl\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772660 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772671 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772681 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.772691 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3624e54-1097-4ab1-bfff-d7e0f721f8f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.775395 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.810873 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66911d31-17db-4d9e-b0c2-9cb699fc0778" (UID: "66911d31-17db-4d9e-b0c2-9cb699fc0778"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:36:42 crc kubenswrapper[4845]: I0202 10:36:42.873467 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66911d31-17db-4d9e-b0c2-9cb699fc0778-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.351183 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.413247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf5wp" event={"ID":"3ceca4a8-b0dd-47cc-a1fe-818e984af772","Type":"ContainerDied","Data":"0d84e03378117188ed8428e61cf660af9bf78e4894f24fa58ab65c169ebc8078"} Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.413305 4845 scope.go:117] "RemoveContainer" containerID="692c027aeff58a06f776073818a159fb8a42b2840f14b77a4b11565409dfc3c8" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.413608 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf5wp" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.416462 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqx9" event={"ID":"fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06","Type":"ContainerDied","Data":"411d114dec4634d8215b7fca2758294946426abdcb66d052869b2ccdb984e078"} Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.416625 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nqqx9" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.420447 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-srnzq" event={"ID":"b3624e54-1097-4ab1-bfff-d7e0f721f8f0","Type":"ContainerDied","Data":"7489421b9d82561c3c59320133d54302d24308214065fb740dafa4f42a2056e8"} Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.420545 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-srnzq" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.423141 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s" event={"ID":"9fc452cb-0731-44f6-aae8-bad730786d8a","Type":"ContainerStarted","Data":"880ea4fc915d7976764e84ba40b8dd836fe5217f2abeca4ef742fe07b59cbdf7"} Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.423177 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s" event={"ID":"9fc452cb-0731-44f6-aae8-bad730786d8a","Type":"ContainerStarted","Data":"272368097c4ce28ad0896f9e461d21f2c134747a55110be66da3698f46d15052"} Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.423349 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.426071 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.426355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nb8f9" event={"ID":"66911d31-17db-4d9e-b0c2-9cb699fc0778","Type":"ContainerDied","Data":"16c241a98cc49754e7cd69effe7e44d81157009a11eabff0c36f137b39003b4c"} Feb 02 
10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.426453 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nb8f9" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.429744 4845 scope.go:117] "RemoveContainer" containerID="9de0fd1bbbe5a9cccba7aa175bf6868c3db78e2836e4127189cb452f9838b8c4" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.438066 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ms22s" podStartSLOduration=2.437978281 podStartE2EDuration="2.437978281s" podCreationTimestamp="2026-02-02 10:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:36:43.437652972 +0000 UTC m=+284.529054462" watchObservedRunningTime="2026-02-02 10:36:43.437978281 +0000 UTC m=+284.529379731" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.466586 4845 scope.go:117] "RemoveContainer" containerID="df3414b89201cc98711df2db8d1c899c06cc968a324e0b8467c7e36e96868e51" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.484671 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-srnzq"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.492011 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-srnzq"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.497843 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqqx9"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.504317 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nqqx9"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.507506 4845 scope.go:117] "RemoveContainer" 
containerID="5769d4596805c8a147c91069eb2109528eac8714acdda17a418b761289524bf3" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.511408 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.517111 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf5wp"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.535116 4845 scope.go:117] "RemoveContainer" containerID="b1fd42c06508a5f7cc77e759a154a4b98880587d65c3430e3b9c376f3441f3e5" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.541846 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.546214 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nb8f9"] Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.552359 4845 scope.go:117] "RemoveContainer" containerID="9aa6e5661ba4da1e3f07e47952d04aef1b22875db8155df2a2c40d59b58e9f5c" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.569034 4845 scope.go:117] "RemoveContainer" containerID="68d3140ebd20281e9eddc75449b750e9027bcd03b8ff2f48c1cab9686d75572d" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.599175 4845 scope.go:117] "RemoveContainer" containerID="5a24fa9c118523cf61e0979a46080c1c45d6df89bfac951d244d917947874a3c" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.621479 4845 scope.go:117] "RemoveContainer" containerID="bce7fc681aa538cb76a68f4632435369a39f67dafd5cfa64c4b02c6ca032214c" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.633750 4845 scope.go:117] "RemoveContainer" containerID="bbcd1edbd954385c8b6a4defc894e861ef2b8ac66447cad7befe95fddedb8599" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.647606 4845 scope.go:117] "RemoveContainer" 
containerID="892e8dde5cc61beb8bab363fa98a836495168d8b933253d6114935636c04fbe1" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.659734 4845 scope.go:117] "RemoveContainer" containerID="ce02065ee7ea69eff70e052c4a44aab42be4dcba45489f2ca7fc1dcf482f400b" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.719045 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" path="/var/lib/kubelet/pods/3ceca4a8-b0dd-47cc-a1fe-818e984af772/volumes" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.720272 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" path="/var/lib/kubelet/pods/66911d31-17db-4d9e-b0c2-9cb699fc0778/volumes" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.721380 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" path="/var/lib/kubelet/pods/b3624e54-1097-4ab1-bfff-d7e0f721f8f0/volumes" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.723295 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" path="/var/lib/kubelet/pods/d2ddc114-bfc4-444f-aeb3-8d43d95bec09/volumes" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.724205 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" path="/var/lib/kubelet/pods/fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06/volumes" Feb 02 10:36:43 crc kubenswrapper[4845]: I0202 10:36:43.812935 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.066535 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.097006 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.097066 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189137 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189206 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189245 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189314 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189371 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189390 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189396 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189451 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189783 4845 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189796 4845 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189806 4845 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.189830 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.197540 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.291259 4845 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.291670 4845 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.450212 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.450656 4845 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446" exitCode=137 Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.450764 4845 scope.go:117] "RemoveContainer" containerID="52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.450938 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.500741 4845 scope.go:117] "RemoveContainer" containerID="52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446" Feb 02 10:36:44 crc kubenswrapper[4845]: E0202 10:36:44.504258 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446\": container with ID starting with 52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446 not found: ID does not exist" containerID="52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446" Feb 02 10:36:44 crc kubenswrapper[4845]: I0202 10:36:44.504476 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446"} err="failed to get container status \"52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446\": rpc error: code = NotFound desc = could not find container \"52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446\": container with ID starting with 52339319d77de91a7e9fe862ac17080ccdec88bb416fc929a78525d869b63446 not found: ID does not exist" Feb 02 10:36:45 crc kubenswrapper[4845]: I0202 10:36:45.722189 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 10:36:59 crc kubenswrapper[4845]: I0202 10:36:59.462125 4845 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.542250 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj"] Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 
10:37:13.542972 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.542986 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543002 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543009 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543019 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543026 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543038 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543044 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543053 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543058 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 
10:37:13.543067 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543074 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543082 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543089 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543097 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543104 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543112 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543118 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543129 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543161 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 
10:37:13.543173 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543180 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="extract-utilities" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543190 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543197 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="extract-content" Feb 02 10:37:13 crc kubenswrapper[4845]: E0202 10:37:13.543207 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543215 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543311 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceca4a8-b0dd-47cc-a1fe-818e984af772" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543324 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ddc114-bfc4-444f-aeb3-8d43d95bec09" containerName="marketplace-operator" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543330 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3624e54-1097-4ab1-bfff-d7e0f721f8f0" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543341 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd2ac18-0c6c-41fe-a9e8-423d6a94ce06" containerName="registry-server" Feb 02 10:37:13 crc 
kubenswrapper[4845]: I0202 10:37:13.543348 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="66911d31-17db-4d9e-b0c2-9cb699fc0778" containerName="registry-server" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.543718 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.546872 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.547655 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.550346 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.552265 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.552327 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.557778 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj"] Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.656556 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4cc1851-2948-4983-81e0-70137f12c223-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 
crc kubenswrapper[4845]: I0202 10:37:13.656751 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f4cc1851-2948-4983-81e0-70137f12c223-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.656830 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcr7h\" (UniqueName: \"kubernetes.io/projected/f4cc1851-2948-4983-81e0-70137f12c223-kube-api-access-vcr7h\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.758738 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f4cc1851-2948-4983-81e0-70137f12c223-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.758826 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcr7h\" (UniqueName: \"kubernetes.io/projected/f4cc1851-2948-4983-81e0-70137f12c223-kube-api-access-vcr7h\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.758865 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f4cc1851-2948-4983-81e0-70137f12c223-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.760857 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f4cc1851-2948-4983-81e0-70137f12c223-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.775130 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4cc1851-2948-4983-81e0-70137f12c223-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.777626 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcr7h\" (UniqueName: \"kubernetes.io/projected/f4cc1851-2948-4983-81e0-70137f12c223-kube-api-access-vcr7h\") pod \"cluster-monitoring-operator-6d5b84845-bzxrj\" (UID: \"f4cc1851-2948-4983-81e0-70137f12c223\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:13 crc kubenswrapper[4845]: I0202 10:37:13.862154 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" Feb 02 10:37:14 crc kubenswrapper[4845]: I0202 10:37:14.318740 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj"] Feb 02 10:37:14 crc kubenswrapper[4845]: I0202 10:37:14.631562 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" event={"ID":"f4cc1851-2948-4983-81e0-70137f12c223","Type":"ContainerStarted","Data":"011698660f79c3f5901b8e6ea7cd5d8421033a50faaf212b382e44364a883db4"} Feb 02 10:37:17 crc kubenswrapper[4845]: I0202 10:37:17.652352 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" event={"ID":"f4cc1851-2948-4983-81e0-70137f12c223","Type":"ContainerStarted","Data":"f0e3afe6cb5d06975889cb71bb7fc76d5f9f7e46e9ee6c91e9065fe146ed995a"} Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.065959 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-bzxrj" podStartSLOduration=1.8998176180000002 podStartE2EDuration="5.065942749s" podCreationTimestamp="2026-02-02 10:37:13 +0000 UTC" firstStartedPulling="2026-02-02 10:37:14.328062483 +0000 UTC m=+315.419463943" lastFinishedPulling="2026-02-02 10:37:17.494187624 +0000 UTC m=+318.585589074" observedRunningTime="2026-02-02 10:37:17.676753481 +0000 UTC m=+318.768154931" watchObservedRunningTime="2026-02-02 10:37:18.065942749 +0000 UTC m=+319.157344199" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.068026 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"] Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.068693 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.070297 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.077540 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"] Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.098577 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-txvzb"] Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.099615 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.114919 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34701544-b4ad-4596-90fe-5783d7decc81-ca-trust-extracted\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115000 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115049 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7fx\" (UniqueName: 
\"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-kube-api-access-hg7fx\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115074 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115133 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34701544-b4ad-4596-90fe-5783d7decc81-installation-pull-secrets\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115149 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-bound-sa-token\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115170 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-registry-certificates\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115191 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-registry-tls\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.115211 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-trusted-ca\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.118549 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-txvzb"] Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.140527 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216524 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 
10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216602 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34701544-b4ad-4596-90fe-5783d7decc81-installation-pull-secrets\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216622 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-bound-sa-token\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: E0202 10:37:18.216639 4845 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:18 crc kubenswrapper[4845]: E0202 10:37:18.216699 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates podName:62e18897-4517-49ff-8a99-6a4447fa6a1e nodeName:}" failed. No retries permitted until 2026-02-02 10:37:18.716683288 +0000 UTC m=+319.808084738 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-8zgnl" (UID: "62e18897-4517-49ff-8a99-6a4447fa6a1e") : secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216646 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-registry-certificates\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216761 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-registry-tls\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216785 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-trusted-ca\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.216810 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34701544-b4ad-4596-90fe-5783d7decc81-ca-trust-extracted\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc 
kubenswrapper[4845]: I0202 10:37:18.216843 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7fx\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-kube-api-access-hg7fx\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.217453 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/34701544-b4ad-4596-90fe-5783d7decc81-ca-trust-extracted\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.217986 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-registry-certificates\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.218124 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34701544-b4ad-4596-90fe-5783d7decc81-trusted-ca\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.222573 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/34701544-b4ad-4596-90fe-5783d7decc81-installation-pull-secrets\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.223773 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-registry-tls\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.234490 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-bound-sa-token\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.240634 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7fx\" (UniqueName: \"kubernetes.io/projected/34701544-b4ad-4596-90fe-5783d7decc81-kube-api-access-hg7fx\") pod \"image-registry-66df7c8f76-txvzb\" (UID: \"34701544-b4ad-4596-90fe-5783d7decc81\") " pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.419685 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.722690 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 10:37:18 crc kubenswrapper[4845]: E0202 10:37:18.722938 4845 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:18 crc kubenswrapper[4845]: E0202 10:37:18.723204 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates podName:62e18897-4517-49ff-8a99-6a4447fa6a1e nodeName:}" failed. No retries permitted until 2026-02-02 10:37:19.72317779 +0000 UTC m=+320.814579320 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-8zgnl" (UID: "62e18897-4517-49ff-8a99-6a4447fa6a1e") : secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:18 crc kubenswrapper[4845]: I0202 10:37:18.829255 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-txvzb"] Feb 02 10:37:18 crc kubenswrapper[4845]: W0202 10:37:18.836157 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34701544_b4ad_4596_90fe_5783d7decc81.slice/crio-4eb83a72b26f208c46d5c1090a36d4c461bcce89c009d27668f4c619abd1c85a WatchSource:0}: Error finding container 4eb83a72b26f208c46d5c1090a36d4c461bcce89c009d27668f4c619abd1c85a: Status 404 returned error can't find the container with id 4eb83a72b26f208c46d5c1090a36d4c461bcce89c009d27668f4c619abd1c85a Feb 02 10:37:19 crc kubenswrapper[4845]: I0202 10:37:19.665077 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" event={"ID":"34701544-b4ad-4596-90fe-5783d7decc81","Type":"ContainerStarted","Data":"47e48c94b9474400eb3edd6ec0ae59077835e53f43b12de7e77cec8911252e65"} Feb 02 10:37:19 crc kubenswrapper[4845]: I0202 10:37:19.665398 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" event={"ID":"34701544-b4ad-4596-90fe-5783d7decc81","Type":"ContainerStarted","Data":"4eb83a72b26f208c46d5c1090a36d4c461bcce89c009d27668f4c619abd1c85a"} Feb 02 10:37:19 crc kubenswrapper[4845]: I0202 10:37:19.665442 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" Feb 02 10:37:19 crc kubenswrapper[4845]: I0202 10:37:19.734604 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 10:37:19 crc kubenswrapper[4845]: E0202 10:37:19.734807 4845 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:19 crc kubenswrapper[4845]: E0202 10:37:19.734860 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates podName:62e18897-4517-49ff-8a99-6a4447fa6a1e nodeName:}" failed. No retries permitted until 2026-02-02 10:37:21.734845054 +0000 UTC m=+322.826246504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-8zgnl" (UID: "62e18897-4517-49ff-8a99-6a4447fa6a1e") : secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:21 crc kubenswrapper[4845]: I0202 10:37:21.762914 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" Feb 02 10:37:21 crc kubenswrapper[4845]: E0202 10:37:21.763095 4845 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:21 crc 
kubenswrapper[4845]: E0202 10:37:21.763175 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates podName:62e18897-4517-49ff-8a99-6a4447fa6a1e nodeName:}" failed. No retries permitted until 2026-02-02 10:37:25.763155928 +0000 UTC m=+326.854557378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-8zgnl" (UID: "62e18897-4517-49ff-8a99-6a4447fa6a1e") : secret "prometheus-operator-admission-webhook-tls" not found Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.590289 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb" podStartSLOduration=6.5902637649999996 podStartE2EDuration="6.590263765s" podCreationTimestamp="2026-02-02 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:37:19.684780843 +0000 UTC m=+320.776182353" watchObservedRunningTime="2026-02-02 10:37:24.590263765 +0000 UTC m=+325.681665245" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.595501 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-575p8"] Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.597426 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.602232 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.602933 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-575p8"] Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.701488 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbcp\" (UniqueName: \"kubernetes.io/projected/5c039981-931c-408f-8185-4d22b3da04a3-kube-api-access-ncbcp\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.701878 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-utilities\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.701940 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-catalog-content\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.803515 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-utilities\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " 
pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.803582 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-catalog-content\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.803626 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbcp\" (UniqueName: \"kubernetes.io/projected/5c039981-931c-408f-8185-4d22b3da04a3-kube-api-access-ncbcp\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.804100 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-utilities\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.804237 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c039981-931c-408f-8185-4d22b3da04a3-catalog-content\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.825862 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbcp\" (UniqueName: \"kubernetes.io/projected/5c039981-931c-408f-8185-4d22b3da04a3-kube-api-access-ncbcp\") pod \"redhat-operators-575p8\" (UID: \"5c039981-931c-408f-8185-4d22b3da04a3\") " pod="openshift-marketplace/redhat-operators-575p8" Feb 
02 10:37:24 crc kubenswrapper[4845]: I0202 10:37:24.926251 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-575p8" Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.374627 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-575p8"] Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.403029 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k66k5"] Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.412109 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k66k5" Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.417657 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.426250 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k66k5"] Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.513718 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-catalog-content\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5" Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.513767 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-utilities\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5" Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.513801 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwhp\" (UniqueName: \"kubernetes.io/projected/26334878-6884-4481-b360-96927a5dd3d6-kube-api-access-jfwhp\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.616042 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-utilities\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.616159 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfwhp\" (UniqueName: \"kubernetes.io/projected/26334878-6884-4481-b360-96927a5dd3d6-kube-api-access-jfwhp\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.616324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-catalog-content\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.616990 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-utilities\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.617154 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26334878-6884-4481-b360-96927a5dd3d6-catalog-content\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.636868 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfwhp\" (UniqueName: \"kubernetes.io/projected/26334878-6884-4481-b360-96927a5dd3d6-kube-api-access-jfwhp\") pod \"community-operators-k66k5\" (UID: \"26334878-6884-4481-b360-96927a5dd3d6\") " pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.707210 4845 generic.go:334] "Generic (PLEG): container finished" podID="5c039981-931c-408f-8185-4d22b3da04a3" containerID="4185cd9ca57ad1f640fcdd5b51bb9ed68a35a9f3bf1fc83ce8866beaa01a05a7" exitCode=0
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.707332 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-575p8" event={"ID":"5c039981-931c-408f-8185-4d22b3da04a3","Type":"ContainerDied","Data":"4185cd9ca57ad1f640fcdd5b51bb9ed68a35a9f3bf1fc83ce8866beaa01a05a7"}
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.707430 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-575p8" event={"ID":"5c039981-931c-408f-8185-4d22b3da04a3","Type":"ContainerStarted","Data":"5e07e7e29ad116d41cb4e352652bcd9c0b766e5b86b1f30f73fdd657f741c84a"}
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.773949 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:25 crc kubenswrapper[4845]: I0202 10:37:25.818778 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:25 crc kubenswrapper[4845]: E0202 10:37:25.818971 4845 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 02 10:37:25 crc kubenswrapper[4845]: E0202 10:37:25.819064 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates podName:62e18897-4517-49ff-8a99-6a4447fa6a1e nodeName:}" failed. No retries permitted until 2026-02-02 10:37:33.819042394 +0000 UTC m=+334.910443844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-8zgnl" (UID: "62e18897-4517-49ff-8a99-6a4447fa6a1e") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 02 10:37:26 crc kubenswrapper[4845]: I0202 10:37:26.213121 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k66k5"]
Feb 02 10:37:26 crc kubenswrapper[4845]: I0202 10:37:26.717679 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-575p8" event={"ID":"5c039981-931c-408f-8185-4d22b3da04a3","Type":"ContainerStarted","Data":"8aa89169d748fb501d984356eca507b6756bac12b478bb616287504751e15981"}
Feb 02 10:37:26 crc kubenswrapper[4845]: I0202 10:37:26.720317 4845 generic.go:334] "Generic (PLEG): container finished" podID="26334878-6884-4481-b360-96927a5dd3d6" containerID="2eaf2b49b3f03a83d519ea62d5a2cc7e2077959126a0db897a3e0db2685de33b" exitCode=0
Feb 02 10:37:26 crc kubenswrapper[4845]: I0202 10:37:26.720367 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k66k5" event={"ID":"26334878-6884-4481-b360-96927a5dd3d6","Type":"ContainerDied","Data":"2eaf2b49b3f03a83d519ea62d5a2cc7e2077959126a0db897a3e0db2685de33b"}
Feb 02 10:37:26 crc kubenswrapper[4845]: I0202 10:37:26.720401 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k66k5" event={"ID":"26334878-6884-4481-b360-96927a5dd3d6","Type":"ContainerStarted","Data":"cd91a19eeb5cd5a216d3ce05005a97dfd2017523c1751040ed87a9014eb73041"}
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.186084 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-skmdg"]
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.187166 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.190594 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.200204 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skmdg"]
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.338054 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjz5v\" (UniqueName: \"kubernetes.io/projected/7b143223-c383-4b6f-b221-c8908e9f93d9-kube-api-access-pjz5v\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.338112 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-catalog-content\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.338217 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-utilities\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.439760 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-utilities\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.439846 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-catalog-content\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.439880 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjz5v\" (UniqueName: \"kubernetes.io/projected/7b143223-c383-4b6f-b221-c8908e9f93d9-kube-api-access-pjz5v\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.440387 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-catalog-content\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.440412 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b143223-c383-4b6f-b221-c8908e9f93d9-utilities\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.460835 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjz5v\" (UniqueName: \"kubernetes.io/projected/7b143223-c383-4b6f-b221-c8908e9f93d9-kube-api-access-pjz5v\") pod \"certified-operators-skmdg\" (UID: \"7b143223-c383-4b6f-b221-c8908e9f93d9\") " pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.506159 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.727110 4845 generic.go:334] "Generic (PLEG): container finished" podID="5c039981-931c-408f-8185-4d22b3da04a3" containerID="8aa89169d748fb501d984356eca507b6756bac12b478bb616287504751e15981" exitCode=0
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.727203 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-575p8" event={"ID":"5c039981-931c-408f-8185-4d22b3da04a3","Type":"ContainerDied","Data":"8aa89169d748fb501d984356eca507b6756bac12b478bb616287504751e15981"}
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.730907 4845 generic.go:334] "Generic (PLEG): container finished" podID="26334878-6884-4481-b360-96927a5dd3d6" containerID="289c9bb9ef02865f3a495638881c70ae558363e91b3e5697700d1a08ff36e684" exitCode=0
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.730951 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k66k5" event={"ID":"26334878-6884-4481-b360-96927a5dd3d6","Type":"ContainerDied","Data":"289c9bb9ef02865f3a495638881c70ae558363e91b3e5697700d1a08ff36e684"}
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.787414 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5vfjf"]
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.791052 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.797071 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.805489 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vfjf"]
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.910705 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skmdg"]
Feb 02 10:37:27 crc kubenswrapper[4845]: W0202 10:37:27.914519 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b143223_c383_4b6f_b221_c8908e9f93d9.slice/crio-20f386c92450c8ff41800329aa63cf123dc0d0d90cfaca1f1fff7a726f5e8cc8 WatchSource:0}: Error finding container 20f386c92450c8ff41800329aa63cf123dc0d0d90cfaca1f1fff7a726f5e8cc8: Status 404 returned error can't find the container with id 20f386c92450c8ff41800329aa63cf123dc0d0d90cfaca1f1fff7a726f5e8cc8
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.947353 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrr2n\" (UniqueName: \"kubernetes.io/projected/ac22736d-1901-40bf-a17f-186de03c64bf-kube-api-access-qrr2n\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.947402 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-utilities\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:27 crc kubenswrapper[4845]: I0202 10:37:27.947422 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-catalog-content\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.048644 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrr2n\" (UniqueName: \"kubernetes.io/projected/ac22736d-1901-40bf-a17f-186de03c64bf-kube-api-access-qrr2n\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.048794 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-utilities\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.048903 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-catalog-content\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.049324 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-utilities\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.049384 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac22736d-1901-40bf-a17f-186de03c64bf-catalog-content\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.073776 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrr2n\" (UniqueName: \"kubernetes.io/projected/ac22736d-1901-40bf-a17f-186de03c64bf-kube-api-access-qrr2n\") pod \"redhat-marketplace-5vfjf\" (UID: \"ac22736d-1901-40bf-a17f-186de03c64bf\") " pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.114691 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.560680 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vfjf"]
Feb 02 10:37:28 crc kubenswrapper[4845]: W0202 10:37:28.567870 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac22736d_1901_40bf_a17f_186de03c64bf.slice/crio-93d578b2ad3caebaf3deb336250feb91f78935baf82fc1408035f753030f0971 WatchSource:0}: Error finding container 93d578b2ad3caebaf3deb336250feb91f78935baf82fc1408035f753030f0971: Status 404 returned error can't find the container with id 93d578b2ad3caebaf3deb336250feb91f78935baf82fc1408035f753030f0971
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.740453 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-575p8" event={"ID":"5c039981-931c-408f-8185-4d22b3da04a3","Type":"ContainerStarted","Data":"4f307f275681b3e859122dad338abf9b818cae545e0852b05b8ff57d6e3aec99"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.744548 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k66k5" event={"ID":"26334878-6884-4481-b360-96927a5dd3d6","Type":"ContainerStarted","Data":"e19db76e594f3dc096438159a0257cb5b78fd1abe6d7052f0c8beb107062c261"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.749552 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac22736d-1901-40bf-a17f-186de03c64bf" containerID="b2bf91b0e8337aec57ec6e01c94edae195c113fb1c77c1805ac5eef6431fde43" exitCode=0
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.749615 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vfjf" event={"ID":"ac22736d-1901-40bf-a17f-186de03c64bf","Type":"ContainerDied","Data":"b2bf91b0e8337aec57ec6e01c94edae195c113fb1c77c1805ac5eef6431fde43"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.749673 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vfjf" event={"ID":"ac22736d-1901-40bf-a17f-186de03c64bf","Type":"ContainerStarted","Data":"93d578b2ad3caebaf3deb336250feb91f78935baf82fc1408035f753030f0971"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.752522 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b143223-c383-4b6f-b221-c8908e9f93d9" containerID="c7851ad137271694bd602acbb342eb3239e9008be47f90ac9dd7087bde61e39f" exitCode=0
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.752558 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skmdg" event={"ID":"7b143223-c383-4b6f-b221-c8908e9f93d9","Type":"ContainerDied","Data":"c7851ad137271694bd602acbb342eb3239e9008be47f90ac9dd7087bde61e39f"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.752580 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skmdg" event={"ID":"7b143223-c383-4b6f-b221-c8908e9f93d9","Type":"ContainerStarted","Data":"20f386c92450c8ff41800329aa63cf123dc0d0d90cfaca1f1fff7a726f5e8cc8"}
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.761964 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-575p8" podStartSLOduration=2.145685209 podStartE2EDuration="4.761941943s" podCreationTimestamp="2026-02-02 10:37:24 +0000 UTC" firstStartedPulling="2026-02-02 10:37:25.709163148 +0000 UTC m=+326.800564588" lastFinishedPulling="2026-02-02 10:37:28.325419872 +0000 UTC m=+329.416821322" observedRunningTime="2026-02-02 10:37:28.761380716 +0000 UTC m=+329.852782186" watchObservedRunningTime="2026-02-02 10:37:28.761941943 +0000 UTC m=+329.853343393"
Feb 02 10:37:28 crc kubenswrapper[4845]: I0202 10:37:28.801474 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k66k5" podStartSLOduration=2.135580702 podStartE2EDuration="3.801451433s" podCreationTimestamp="2026-02-02 10:37:25 +0000 UTC" firstStartedPulling="2026-02-02 10:37:26.721997608 +0000 UTC m=+327.813399068" lastFinishedPulling="2026-02-02 10:37:28.387868349 +0000 UTC m=+329.479269799" observedRunningTime="2026-02-02 10:37:28.783077835 +0000 UTC m=+329.874479285" watchObservedRunningTime="2026-02-02 10:37:28.801451433 +0000 UTC m=+329.892852883"
Feb 02 10:37:29 crc kubenswrapper[4845]: I0202 10:37:29.761033 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac22736d-1901-40bf-a17f-186de03c64bf" containerID="916c950dcfc4493f22b562df8546551e4d1d58c44d349f20d93bba22d20f1596" exitCode=0
Feb 02 10:37:29 crc kubenswrapper[4845]: I0202 10:37:29.761098 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vfjf" event={"ID":"ac22736d-1901-40bf-a17f-186de03c64bf","Type":"ContainerDied","Data":"916c950dcfc4493f22b562df8546551e4d1d58c44d349f20d93bba22d20f1596"}
Feb 02 10:37:29 crc kubenswrapper[4845]: I0202 10:37:29.763126 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b143223-c383-4b6f-b221-c8908e9f93d9" containerID="b8b9f5c5d233746e1f26cc29696184f49f1715735404c137e2048c38b1618bd0" exitCode=0
Feb 02 10:37:29 crc kubenswrapper[4845]: I0202 10:37:29.763160 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skmdg" event={"ID":"7b143223-c383-4b6f-b221-c8908e9f93d9","Type":"ContainerDied","Data":"b8b9f5c5d233746e1f26cc29696184f49f1715735404c137e2048c38b1618bd0"}
Feb 02 10:37:30 crc kubenswrapper[4845]: I0202 10:37:30.770449 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vfjf" event={"ID":"ac22736d-1901-40bf-a17f-186de03c64bf","Type":"ContainerStarted","Data":"116dbf112e1df83646e0fd988db1e4f727ad8eb134aca7ae54b19ef1ade22ea5"}
Feb 02 10:37:30 crc kubenswrapper[4845]: I0202 10:37:30.772677 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skmdg" event={"ID":"7b143223-c383-4b6f-b221-c8908e9f93d9","Type":"ContainerStarted","Data":"1954d970a8ab3f295fb6eee4359d67d483db0625964b75e3c66b4b76a64fe467"}
Feb 02 10:37:30 crc kubenswrapper[4845]: I0202 10:37:30.793035 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vfjf" podStartSLOduration=2.34811045 podStartE2EDuration="3.793015318s" podCreationTimestamp="2026-02-02 10:37:27 +0000 UTC" firstStartedPulling="2026-02-02 10:37:28.751230477 +0000 UTC m=+329.842631927" lastFinishedPulling="2026-02-02 10:37:30.196135345 +0000 UTC m=+331.287536795" observedRunningTime="2026-02-02 10:37:30.78945493 +0000 UTC m=+331.880856380" watchObservedRunningTime="2026-02-02 10:37:30.793015318 +0000 UTC m=+331.884416768"
Feb 02 10:37:30 crc kubenswrapper[4845]: I0202 10:37:30.805324 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-skmdg" podStartSLOduration=2.246818254 podStartE2EDuration="3.805301242s" podCreationTimestamp="2026-02-02 10:37:27 +0000 UTC" firstStartedPulling="2026-02-02 10:37:28.755474496 +0000 UTC m=+329.846875946" lastFinishedPulling="2026-02-02 10:37:30.313957484 +0000 UTC m=+331.405358934" observedRunningTime="2026-02-02 10:37:30.804797516 +0000 UTC m=+331.896198966" watchObservedRunningTime="2026-02-02 10:37:30.805301242 +0000 UTC m=+331.896702692"
Feb 02 10:37:33 crc kubenswrapper[4845]: I0202 10:37:33.825340 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:33 crc kubenswrapper[4845]: I0202 10:37:33.839639 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/62e18897-4517-49ff-8a99-6a4447fa6a1e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8zgnl\" (UID: \"62e18897-4517-49ff-8a99-6a4447fa6a1e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:33 crc kubenswrapper[4845]: I0202 10:37:33.984699 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:34 crc kubenswrapper[4845]: I0202 10:37:34.481611 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"]
Feb 02 10:37:34 crc kubenswrapper[4845]: I0202 10:37:34.795856 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" event={"ID":"62e18897-4517-49ff-8a99-6a4447fa6a1e","Type":"ContainerStarted","Data":"cf414cc8499360af2ec8668da8406791c87e4f3dfba5bf93e4ad21e80f2743ff"}
Feb 02 10:37:34 crc kubenswrapper[4845]: I0202 10:37:34.927308 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-575p8"
Feb 02 10:37:34 crc kubenswrapper[4845]: I0202 10:37:34.927411 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-575p8"
Feb 02 10:37:34 crc kubenswrapper[4845]: I0202 10:37:34.975750 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-575p8"
Feb 02 10:37:35 crc kubenswrapper[4845]: I0202 10:37:35.775249 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:35 crc kubenswrapper[4845]: I0202 10:37:35.775932 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:35 crc kubenswrapper[4845]: I0202 10:37:35.828623 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:35 crc kubenswrapper[4845]: I0202 10:37:35.855303 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-575p8"
Feb 02 10:37:35 crc kubenswrapper[4845]: I0202 10:37:35.907616 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k66k5"
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.507052 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.507403 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.550401 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.821768 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" event={"ID":"62e18897-4517-49ff-8a99-6a4447fa6a1e","Type":"ContainerStarted","Data":"710d9d8207825262c0d5ee981005df7e99702ffd23e5db7ee40bc8d3b0a06dbf"}
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.844975 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl" podStartSLOduration=17.679334727 podStartE2EDuration="19.844953399s" podCreationTimestamp="2026-02-02 10:37:18 +0000 UTC" firstStartedPulling="2026-02-02 10:37:34.497201443 +0000 UTC m=+335.588602933" lastFinishedPulling="2026-02-02 10:37:36.662820145 +0000 UTC m=+337.754221605" observedRunningTime="2026-02-02 10:37:37.839665609 +0000 UTC m=+338.931067069" watchObservedRunningTime="2026-02-02 10:37:37.844953399 +0000 UTC m=+338.936354859"
Feb 02 10:37:37 crc kubenswrapper[4845]: I0202 10:37:37.865959 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-skmdg"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.114875 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.114978 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.164155 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.426607 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-txvzb"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.484270 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"]
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.828552 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.835425 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8zgnl"
Feb 02 10:37:38 crc kubenswrapper[4845]: I0202 10:37:38.897739 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vfjf"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.115729 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-f6hrw"]
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.117442 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.120841 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.120936 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-ww5h7"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.121123 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.121355 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.125418 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-f6hrw"]
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.225466 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b56dea6f-784c-4bd6-b1fe-e34acac80980-metrics-client-ca\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.226051 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.226113 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.226137 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmk6\" (UniqueName: \"kubernetes.io/projected/b56dea6f-784c-4bd6-b1fe-e34acac80980-kube-api-access-9wmk6\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.327567 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.327657 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.327682 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmk6\" (UniqueName: \"kubernetes.io/projected/b56dea6f-784c-4bd6-b1fe-e34acac80980-kube-api-access-9wmk6\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.327718 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b56dea6f-784c-4bd6-b1fe-e34acac80980-metrics-client-ca\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.328743 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b56dea6f-784c-4bd6-b1fe-e34acac80980-metrics-client-ca\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.333352 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.333922 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b56dea6f-784c-4bd6-b1fe-e34acac80980-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.355061 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmk6\" (UniqueName: \"kubernetes.io/projected/b56dea6f-784c-4bd6-b1fe-e34acac80980-kube-api-access-9wmk6\") pod \"prometheus-operator-db54df47d-f6hrw\" (UID: \"b56dea6f-784c-4bd6-b1fe-e34acac80980\") " pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.435251 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw"
Feb 02 10:37:39 crc kubenswrapper[4845]: I0202 10:37:39.937300 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-f6hrw"]
Feb 02 10:37:39 crc kubenswrapper[4845]: W0202 10:37:39.950939 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb56dea6f_784c_4bd6_b1fe_e34acac80980.slice/crio-fc8b71d598095f83c64f5effd4e64ff571a57aeedde349ddb6252a94f8e9f3a5 WatchSource:0}: Error finding container fc8b71d598095f83c64f5effd4e64ff571a57aeedde349ddb6252a94f8e9f3a5: Status 404 returned error can't find the container with id fc8b71d598095f83c64f5effd4e64ff571a57aeedde349ddb6252a94f8e9f3a5
Feb 02 10:37:40 crc kubenswrapper[4845]: I0202 10:37:40.839994 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw" event={"ID":"b56dea6f-784c-4bd6-b1fe-e34acac80980","Type":"ContainerStarted","Data":"fc8b71d598095f83c64f5effd4e64ff571a57aeedde349ddb6252a94f8e9f3a5"}
Feb 02 10:37:42 crc kubenswrapper[4845]: I0202 10:37:42.854972 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw" event={"ID":"b56dea6f-784c-4bd6-b1fe-e34acac80980","Type":"ContainerStarted","Data":"42cc51f8184b2e91c8f117c40ac29dd0c0fad588bc744b945da4b9fa2d828ca9"}
Feb 02 10:37:42 crc kubenswrapper[4845]: I0202 10:37:42.856114 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw" event={"ID":"b56dea6f-784c-4bd6-b1fe-e34acac80980","Type":"ContainerStarted","Data":"8d009f466c90bcdad40b05d268720d63ada3d8fccaf4579a8b498f28e3040ced"}
Feb 02 10:37:42 crc kubenswrapper[4845]: I0202 10:37:42.879414 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-f6hrw" podStartSLOduration=1.75905793 podStartE2EDuration="3.879387477s" podCreationTimestamp="2026-02-02 10:37:39 +0000 UTC" firstStartedPulling="2026-02-02 10:37:39.953291361 +0000 UTC m=+341.044692811" lastFinishedPulling="2026-02-02 10:37:42.073620908 +0000 UTC m=+343.165022358" observedRunningTime="2026-02-02 10:37:42.876962443 +0000 UTC m=+343.968363923" watchObservedRunningTime="2026-02-02 10:37:42.879387477 +0000 UTC m=+343.970788947"
Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.479306 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-x6klx"]
Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.480950 4845 util.go:30] "No sandbox for pod can be
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.483298 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.483503 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-hl897" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.483634 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.509902 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69"] Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.511719 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.513354 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.513602 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.516517 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.516760 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-lsl2f" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.545418 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/openshift-state-metrics-566fddb674-x6klx"] Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.553102 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69"] Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.575769 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-jx8bv"] Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.577202 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.579937 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.584192 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gddws" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.594194 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601012 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f5148935-61d4-4a95-9c57-7f1ee944dbfb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601055 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601078 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcz2b\" (UniqueName: \"kubernetes.io/projected/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-api-access-bcz2b\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601126 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601151 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601223 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhf7b\" (UniqueName: \"kubernetes.io/projected/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-kube-api-access-bhf7b\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601257 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601292 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.601342 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702247 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702298 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702336 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14e66b69-f723-4777-be6f-a522117a2b5b-metrics-client-ca\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702368 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-tls\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702407 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-wtmp\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 
crc kubenswrapper[4845]: I0202 10:37:44.702430 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702454 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-textfile\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702488 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-root\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702511 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f5148935-61d4-4a95-9c57-7f1ee944dbfb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702532 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: 
\"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702556 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcz2b\" (UniqueName: \"kubernetes.io/projected/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-api-access-bcz2b\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702580 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdsv\" (UniqueName: \"kubernetes.io/projected/14e66b69-f723-4777-be6f-a522117a2b5b-kube-api-access-xhdsv\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702611 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702635 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702659 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702689 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702715 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-sys\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.702740 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhf7b\" (UniqueName: \"kubernetes.io/projected/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-kube-api-access-bhf7b\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: E0202 10:37:44.703250 4845 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 02 10:37:44 crc kubenswrapper[4845]: E0202 10:37:44.704231 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls podName:f5148935-61d4-4a95-9c57-7f1ee944dbfb nodeName:}" failed. No retries permitted until 2026-02-02 10:37:45.204208566 +0000 UTC m=+346.295610016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-sdb69" (UID: "f5148935-61d4-4a95-9c57-7f1ee944dbfb") : secret "kube-state-metrics-tls" not found Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.703805 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f5148935-61d4-4a95-9c57-7f1ee944dbfb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.703387 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.704360 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.704670 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5148935-61d4-4a95-9c57-7f1ee944dbfb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.710146 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.712430 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.727632 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhf7b\" (UniqueName: \"kubernetes.io/projected/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-kube-api-access-bhf7b\") pod \"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.731394 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2ba7cee-a22a-4b49-8871-6e54a93e6ebd-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-566fddb674-x6klx\" (UID: \"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.732563 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcz2b\" (UniqueName: \"kubernetes.io/projected/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-api-access-bcz2b\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804325 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-wtmp\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804371 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-textfile\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804399 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-root\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804429 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhdsv\" (UniqueName: 
\"kubernetes.io/projected/14e66b69-f723-4777-be6f-a522117a2b5b-kube-api-access-xhdsv\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804453 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-sys\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804536 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14e66b69-f723-4777-be6f-a522117a2b5b-metrics-client-ca\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.804561 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-tls\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.807197 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-tls\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.807417 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-wtmp\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.807645 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-textfile\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.807678 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-root\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.823470 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14e66b69-f723-4777-be6f-a522117a2b5b-sys\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.823995 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14e66b69-f723-4777-be6f-a522117a2b5b-metrics-client-ca\") pod \"node-exporter-jx8bv\" (UID: 
\"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.828347 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14e66b69-f723-4777-be6f-a522117a2b5b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.839244 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.842602 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhdsv\" (UniqueName: \"kubernetes.io/projected/14e66b69-f723-4777-be6f-a522117a2b5b-kube-api-access-xhdsv\") pod \"node-exporter-jx8bv\" (UID: \"14e66b69-f723-4777-be6f-a522117a2b5b\") " pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:44 crc kubenswrapper[4845]: I0202 10:37:44.895180 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-jx8bv" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.214246 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.220033 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f5148935-61d4-4a95-9c57-7f1ee944dbfb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-sdb69\" (UID: \"f5148935-61d4-4a95-9c57-7f1ee944dbfb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.301370 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-x6klx"] Feb 02 10:37:45 crc kubenswrapper[4845]: W0202 10:37:45.307001 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2ba7cee_a22a_4b49_8871_6e54a93e6ebd.slice/crio-df4b604ce3a5e4429045bef8d6eb900430c8197fe4d05c711d94cf0c049c6280 WatchSource:0}: Error finding container df4b604ce3a5e4429045bef8d6eb900430c8197fe4d05c711d94cf0c049c6280: Status 404 returned error can't find the container with id df4b604ce3a5e4429045bef8d6eb900430c8197fe4d05c711d94cf0c049c6280 Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.444945 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.640915 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.643985 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651285 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651448 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651516 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651734 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651931 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.651993 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.652082 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-kfgx6" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.652097 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.659535 4845 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.669780 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.703274 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69"] Feb 02 10:37:45 crc kubenswrapper[4845]: W0202 10:37:45.706387 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5148935_61d4_4a95_9c57_7f1ee944dbfb.slice/crio-1eb58e24425a7988f47e69ac7271f1377e407e0daa2215f9631f516dc76b0505 WatchSource:0}: Error finding container 1eb58e24425a7988f47e69ac7271f1377e407e0daa2215f9631f516dc76b0505: Status 404 returned error can't find the container with id 1eb58e24425a7988f47e69ac7271f1377e407e0daa2215f9631f516dc76b0505 Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848300 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-config-volume\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848385 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-web-config\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848427 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848445 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848552 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848601 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848728 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc 
kubenswrapper[4845]: I0202 10:37:45.848835 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-config-out\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848937 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848967 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.848990 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9dx\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-kube-api-access-td9dx\") pod \"alertmanager-main-0\" (UID: 
\"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.873575 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" event={"ID":"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd","Type":"ContainerStarted","Data":"e16a9c5415275eb9256529c450eb29d6043c882384257d2697352f5dd6fc62d3"} Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.873620 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" event={"ID":"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd","Type":"ContainerStarted","Data":"df4b604ce3a5e4429045bef8d6eb900430c8197fe4d05c711d94cf0c049c6280"} Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.875310 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jx8bv" event={"ID":"14e66b69-f723-4777-be6f-a522117a2b5b","Type":"ContainerStarted","Data":"d8fd4544243bb752cf9fa52679e54e945c840c5bd5edd59957f2cfabc4fbdac4"} Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.876274 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" event={"ID":"f5148935-61d4-4a95-9c57-7f1ee944dbfb","Type":"ContainerStarted","Data":"1eb58e24425a7988f47e69ac7271f1377e407e0daa2215f9631f516dc76b0505"} Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951185 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-config-out\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951249 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951291 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951315 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951344 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9dx\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-kube-api-access-td9dx\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951372 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-config-volume\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951407 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-web-config\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951444 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951464 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951505 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951530 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.951568 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: E0202 10:37:45.951693 4845 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 02 10:37:45 crc kubenswrapper[4845]: E0202 10:37:45.951751 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls podName:f7d75d05-4636-4623-b561-1b2e713ac513 nodeName:}" failed. No retries permitted until 2026-02-02 10:37:46.451731396 +0000 UTC m=+347.543132846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "f7d75d05-4636-4623-b561-1b2e713ac513") : secret "alertmanager-main-tls" not found Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.953025 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.953323 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.953541 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d75d05-4636-4623-b561-1b2e713ac513-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.959170 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-config-volume\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.962296 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.964082 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.964817 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 
10:37:45.964919 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-web-config\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.965665 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7d75d05-4636-4623-b561-1b2e713ac513-config-out\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.966670 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-tls-assets\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:45 crc kubenswrapper[4845]: I0202 10:37:45.972013 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9dx\" (UniqueName: \"kubernetes.io/projected/f7d75d05-4636-4623-b561-1b2e713ac513-kube-api-access-td9dx\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.237755 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.237811 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:37:46 crc kubenswrapper[4845]: E0202 10:37:46.459790 4845 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.459968 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:46 crc kubenswrapper[4845]: E0202 10:37:46.460084 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls podName:f7d75d05-4636-4623-b561-1b2e713ac513 nodeName:}" failed. No retries permitted until 2026-02-02 10:37:47.460065279 +0000 UTC m=+348.551466729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "f7d75d05-4636-4623-b561-1b2e713ac513") : secret "alertmanager-main-tls" not found Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.533561 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-f47c49b7c-j9wsh"] Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.535132 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.544077 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.544122 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.545024 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-2hz75" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.545381 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.546139 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9f1hp0m1quk94" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.550421 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f47c49b7c-j9wsh"] Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.552113 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.556107 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.664696 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " 
pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.666366 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.666550 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.666869 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-grpc-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.667026 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.667135 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07221aaf-e31a-42a4-8033-ba7ad6d21564-metrics-client-ca\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.667298 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.667439 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvwx\" (UniqueName: \"kubernetes.io/projected/07221aaf-e31a-42a4-8033-ba7ad6d21564-kube-api-access-5bvwx\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.768595 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-grpc-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.768990 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: 
\"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769198 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07221aaf-e31a-42a4-8033-ba7ad6d21564-metrics-client-ca\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769346 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769478 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvwx\" (UniqueName: \"kubernetes.io/projected/07221aaf-e31a-42a4-8033-ba7ad6d21564-kube-api-access-5bvwx\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769633 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769788 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" 
(UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.769976 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.770665 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/07221aaf-e31a-42a4-8033-ba7ad6d21564-metrics-client-ca\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.774059 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.775198 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " 
pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.776083 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-grpc-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.776313 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.778006 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-tls\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.781930 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/07221aaf-e31a-42a4-8033-ba7ad6d21564-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.787364 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvwx\" (UniqueName: 
\"kubernetes.io/projected/07221aaf-e31a-42a4-8033-ba7ad6d21564-kube-api-access-5bvwx\") pod \"thanos-querier-f47c49b7c-j9wsh\" (UID: \"07221aaf-e31a-42a4-8033-ba7ad6d21564\") " pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.855291 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:46 crc kubenswrapper[4845]: I0202 10:37:46.885748 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" event={"ID":"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd","Type":"ContainerStarted","Data":"a8a2760978ab69931349876782201184bee731dfc6b3210a347e2ff846996665"} Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.415976 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f47c49b7c-j9wsh"] Feb 02 10:37:47 crc kubenswrapper[4845]: W0202 10:37:47.447089 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07221aaf_e31a_42a4_8033_ba7ad6d21564.slice/crio-bfed3d4492f47ecc8dc70e58356f7633806204028ffc251608fa59eceb6e08ca WatchSource:0}: Error finding container bfed3d4492f47ecc8dc70e58356f7633806204028ffc251608fa59eceb6e08ca: Status 404 returned error can't find the container with id bfed3d4492f47ecc8dc70e58356f7633806204028ffc251608fa59eceb6e08ca Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.479692 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.485230 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/f7d75d05-4636-4623-b561-1b2e713ac513-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"f7d75d05-4636-4623-b561-1b2e713ac513\") " pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.786390 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.891956 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"bfed3d4492f47ecc8dc70e58356f7633806204028ffc251608fa59eceb6e08ca"} Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.893724 4845 generic.go:334] "Generic (PLEG): container finished" podID="14e66b69-f723-4777-be6f-a522117a2b5b" containerID="5435b2531b4062c1a8e8e48efb799c8ef7ce24967ec1ed9d77c1825230e5377a" exitCode=0 Feb 02 10:37:47 crc kubenswrapper[4845]: I0202 10:37:47.893754 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jx8bv" event={"ID":"14e66b69-f723-4777-be6f-a522117a2b5b","Type":"ContainerDied","Data":"5435b2531b4062c1a8e8e48efb799c8ef7ce24967ec1ed9d77c1825230e5377a"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.905376 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" event={"ID":"d2ba7cee-a22a-4b49-8871-6e54a93e6ebd","Type":"ContainerStarted","Data":"ec2380ebbd79f655eb51ee2a934fe2f867e4b5abc36822ffd0ac45fcd508f7b0"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.915647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jx8bv" 
event={"ID":"14e66b69-f723-4777-be6f-a522117a2b5b","Type":"ContainerStarted","Data":"a9e6442ab44737a0bb960a0fb9b2b793c2f9525ccfb2c481b9f51c7108f6bf78"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.915689 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jx8bv" event={"ID":"14e66b69-f723-4777-be6f-a522117a2b5b","Type":"ContainerStarted","Data":"f3cf301607078b11c59a124209db136776e9477d316d4651c1f1262e2884a007"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.925947 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" event={"ID":"f5148935-61d4-4a95-9c57-7f1ee944dbfb","Type":"ContainerStarted","Data":"fc43efbe8f84ad54806474e5b36014659a62e37ca077068a64698aa50a600b44"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.926025 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" event={"ID":"f5148935-61d4-4a95-9c57-7f1ee944dbfb","Type":"ContainerStarted","Data":"adc5a4d1188e3599e3e8115cfd681f96ff5178b4549e2418f7a9db182776df15"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:48.932021 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-x6klx" podStartSLOduration=2.485963937 podStartE2EDuration="4.932004218s" podCreationTimestamp="2026-02-02 10:37:44 +0000 UTC" firstStartedPulling="2026-02-02 10:37:45.950734246 +0000 UTC m=+347.042135696" lastFinishedPulling="2026-02-02 10:37:48.396774537 +0000 UTC m=+349.488175977" observedRunningTime="2026-02-02 10:37:48.929015507 +0000 UTC m=+350.020416957" watchObservedRunningTime="2026-02-02 10:37:48.932004218 +0000 UTC m=+350.023405668" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.258739 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-jx8bv" podStartSLOduration=3.324197153 
podStartE2EDuration="5.258719684s" podCreationTimestamp="2026-02-02 10:37:44 +0000 UTC" firstStartedPulling="2026-02-02 10:37:44.931527352 +0000 UTC m=+346.022928802" lastFinishedPulling="2026-02-02 10:37:46.866049883 +0000 UTC m=+347.957451333" observedRunningTime="2026-02-02 10:37:48.958622727 +0000 UTC m=+350.050024217" watchObservedRunningTime="2026-02-02 10:37:49.258719684 +0000 UTC m=+350.350121134" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.261639 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.262446 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.280404 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409621 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409677 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409727 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409751 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhj26\" (UniqueName: \"kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409813 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409871 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.409936 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.511798 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.511845 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.511902 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.511956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.511975 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.512006 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.512021 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhj26\" (UniqueName: \"kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.513089 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.513282 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.513294 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.513384 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.517513 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.517926 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.530599 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhj26\" (UniqueName: \"kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26\") pod \"console-986c9fb64-5l8tt\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.578323 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.634325 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 02 10:37:49 crc kubenswrapper[4845]: W0202 10:37:49.643360 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d75d05_4636_4623_b561_1b2e713ac513.slice/crio-bcf7fba67453e8f1dab59e9940c4fa120757e4fe8e9f7b2607bbe33225d17342 WatchSource:0}: Error finding container bcf7fba67453e8f1dab59e9940c4fa120757e4fe8e9f7b2607bbe33225d17342: Status 404 returned error can't find the container with id bcf7fba67453e8f1dab59e9940c4fa120757e4fe8e9f7b2607bbe33225d17342 Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.926664 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"] Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.928027 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.932011 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.932271 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.932507 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.933028 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.933151 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-f978s" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.933308 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cjt1cflo97mfk" Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.939972 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" event={"ID":"f5148935-61d4-4a95-9c57-7f1ee944dbfb","Type":"ContainerStarted","Data":"95dcf38fcdf75b06770cc60660242daaefe50de763bda19388b7f3d4c05fe105"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.944536 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"bcf7fba67453e8f1dab59e9940c4fa120757e4fe8e9f7b2607bbe33225d17342"} Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.952043 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"] Feb 02 10:37:49 crc kubenswrapper[4845]: I0202 10:37:49.977364 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-sdb69" podStartSLOduration=3.294567322 podStartE2EDuration="5.977347885s" podCreationTimestamp="2026-02-02 10:37:44 +0000 UTC" firstStartedPulling="2026-02-02 10:37:45.709185578 +0000 UTC m=+346.800587038" lastFinishedPulling="2026-02-02 10:37:48.391966151 +0000 UTC m=+349.483367601" observedRunningTime="2026-02-02 10:37:49.975605873 +0000 UTC m=+351.067007343" watchObservedRunningTime="2026-02-02 10:37:49.977347885 +0000 UTC m=+351.068749335" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:49.999208 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.132958 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133024 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133048 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133092 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133122 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgfq\" (UniqueName: \"kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.133407 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc 
kubenswrapper[4845]: I0202 10:37:50.234628 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234692 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234721 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234767 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234795 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfgfq\" (UniqueName: \"kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq\") pod \"metrics-server-76d65679c8-9dw7p\" 
(UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234829 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.234915 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.235848 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.235941 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.236638 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.247724 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.258573 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.259514 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.260375 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.261856 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.264762 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.265992 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.266763 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgfq\" (UniqueName: \"kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq\") pod \"metrics-server-76d65679c8-9dw7p\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.270099 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.438317 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert\") pod \"monitoring-plugin-db4cbb94b-xntgm\" (UID: \"f5753406-2b60-4929-85b3-8e01c37218b3\") " pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.539459 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert\") pod \"monitoring-plugin-db4cbb94b-xntgm\" (UID: \"f5753406-2b60-4929-85b3-8e01c37218b3\") " pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.546296 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.547193 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert\") pod \"monitoring-plugin-db4cbb94b-xntgm\" (UID: \"f5753406-2b60-4929-85b3-8e01c37218b3\") " pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.615269 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.916870 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.921020 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.923228 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.923384 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.923551 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.930420 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.930792 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7t4uaoa6dn2sc" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.931407 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.931559 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.931641 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.931686 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-6pqjf" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.931744 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.932051 4845 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.935752 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.940328 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.943939 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970518 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970590 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-web-config\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970637 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970668 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970730 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970853 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970933 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.970989 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.971012 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645tt\" (UniqueName: \"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-kube-api-access-645tt\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.971051 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.971091 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config-out\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.971224 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.972368 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc 
kubenswrapper[4845]: I0202 10:37:50.972405 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.973082 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.982631 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.982916 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.983002 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" 
(UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.988426 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-986c9fb64-5l8tt" event={"ID":"615f8561-b519-43b6-8864-9b1275443e98","Type":"ContainerStarted","Data":"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38"} Feb 02 10:37:50 crc kubenswrapper[4845]: I0202 10:37:50.988474 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-986c9fb64-5l8tt" event={"ID":"615f8561-b519-43b6-8864-9b1275443e98","Type":"ContainerStarted","Data":"3fbb867d7ca80df191b9729b7982330f959b62e993162b6e2d6a5e4f58aec846"} Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.007536 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-986c9fb64-5l8tt" podStartSLOduration=2.007520252 podStartE2EDuration="2.007520252s" podCreationTimestamp="2026-02-02 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:37:51.001945563 +0000 UTC m=+352.093347013" watchObservedRunningTime="2026-02-02 10:37:51.007520252 +0000 UTC m=+352.098921702" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.084846 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.084937 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085027 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085072 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085091 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-web-config\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085119 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085147 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085164 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085197 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085212 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085238 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085255 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645tt\" (UniqueName: 
\"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-kube-api-access-645tt\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085275 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085295 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config-out\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085335 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085389 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085421 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.085455 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.086060 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.086666 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.086993 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.100624 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-web-config\") pod 
\"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.103556 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.103828 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645tt\" (UniqueName: \"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-kube-api-access-645tt\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.103852 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.104763 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.104904 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.104953 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.109498 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.110564 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-config-out\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.110620 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.110728 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.111199 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.111341 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.111549 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.111663 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/92c6b031-d11f-4f89-84e5-bbbe36ea3bba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"92c6b031-d11f-4f89-84e5-bbbe36ea3bba\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.282666 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 10:37:51 crc kubenswrapper[4845]: I0202 10:37:51.873708 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:37:52 crc kubenswrapper[4845]: I0202 10:37:52.001503 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"682641897f869349de46992bb4ffa28e8d32740f2b7661ad69a3de1f754a28e0"} Feb 02 10:37:52 crc kubenswrapper[4845]: I0202 10:37:52.013471 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" event={"ID":"f5753406-2b60-4929-85b3-8e01c37218b3","Type":"ContainerStarted","Data":"7967ac4b496b366ce99f5e32cc502e0dcf108c64a09981d769ca1e0919a03dd6"} Feb 02 10:37:52 crc kubenswrapper[4845]: I0202 10:37:52.183053 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"] Feb 02 10:37:52 crc kubenswrapper[4845]: I0202 10:37:52.296147 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 02 10:37:52 crc kubenswrapper[4845]: W0202 10:37:52.311383 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c6b031_d11f_4f89_84e5_bbbe36ea3bba.slice/crio-adbc1ecb282316c17d31de4343b73f6caf2a0ba2e99fae9631669fba198c3172 WatchSource:0}: Error finding container adbc1ecb282316c17d31de4343b73f6caf2a0ba2e99fae9631669fba198c3172: Status 404 returned error can't find the container with id adbc1ecb282316c17d31de4343b73f6caf2a0ba2e99fae9631669fba198c3172 Feb 02 10:37:53 crc kubenswrapper[4845]: I0202 10:37:53.020029 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" 
event={"ID":"ec05463b-fba2-442c-9ba3-893de7b61f92","Type":"ContainerStarted","Data":"9c81f12ccf75970d5f3c58822634af178231901d9efb8c97f05d144b2887fc22"} Feb 02 10:37:53 crc kubenswrapper[4845]: I0202 10:37:53.022796 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"265f52b96dae2af9ca161199641671dcd0244053ad41833a34413da5fcd7546c"} Feb 02 10:37:53 crc kubenswrapper[4845]: I0202 10:37:53.022826 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"b668e8441ad52954d88be68329fb6d82709073834dd9c8df2d5d66d5b1cf2377"} Feb 02 10:37:53 crc kubenswrapper[4845]: I0202 10:37:53.024061 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"adbc1ecb282316c17d31de4343b73f6caf2a0ba2e99fae9631669fba198c3172"} Feb 02 10:37:54 crc kubenswrapper[4845]: I0202 10:37:54.035695 4845 generic.go:334] "Generic (PLEG): container finished" podID="f7d75d05-4636-4623-b561-1b2e713ac513" containerID="3af08772545699ef7b9596cdd64da778a35bb868396259805a90b121052198a6" exitCode=0 Feb 02 10:37:54 crc kubenswrapper[4845]: I0202 10:37:54.036077 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerDied","Data":"3af08772545699ef7b9596cdd64da778a35bb868396259805a90b121052198a6"} Feb 02 10:37:54 crc kubenswrapper[4845]: I0202 10:37:54.041961 4845 generic.go:334] "Generic (PLEG): container finished" podID="92c6b031-d11f-4f89-84e5-bbbe36ea3bba" containerID="eaed1c05688f22ab8cc4f55eefa5c9f3f8e492a46e0715c07adb13c6a8ab0aa7" exitCode=0 Feb 02 10:37:54 crc kubenswrapper[4845]: I0202 
10:37:54.041993 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerDied","Data":"eaed1c05688f22ab8cc4f55eefa5c9f3f8e492a46e0715c07adb13c6a8ab0aa7"} Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.067027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"29ebd4091191dd5397a8e1d783aefa4c8e4c8d5cd67293e6735c3bfbf5d39864"} Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.067664 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"fd65c200971d2cb072efd12f4a9152d2ab17f8031c1ebee0923a018bfa090ffb"} Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.067768 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" event={"ID":"07221aaf-e31a-42a4-8033-ba7ad6d21564","Type":"ContainerStarted","Data":"8e0e74b1568991705ba743e41fce420885a125f511789138985579e8ddfbbf03"} Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.070920 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.083829 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.088342 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" event={"ID":"f5753406-2b60-4929-85b3-8e01c37218b3","Type":"ContainerStarted","Data":"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4"} Feb 02 10:37:57 crc 
kubenswrapper[4845]: I0202 10:37:57.088868 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.093998 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" event={"ID":"ec05463b-fba2-442c-9ba3-893de7b61f92","Type":"ContainerStarted","Data":"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b"} Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.101858 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.115103 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-f47c49b7c-j9wsh" podStartSLOduration=2.484939295 podStartE2EDuration="11.115084423s" podCreationTimestamp="2026-02-02 10:37:46 +0000 UTC" firstStartedPulling="2026-02-02 10:37:47.450314993 +0000 UTC m=+348.541716443" lastFinishedPulling="2026-02-02 10:37:56.080460121 +0000 UTC m=+357.171861571" observedRunningTime="2026-02-02 10:37:57.100281923 +0000 UTC m=+358.191683373" watchObservedRunningTime="2026-02-02 10:37:57.115084423 +0000 UTC m=+358.206485873" Feb 02 10:37:57 crc kubenswrapper[4845]: I0202 10:37:57.121700 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" podStartSLOduration=4.228803675 podStartE2EDuration="8.121681843s" podCreationTimestamp="2026-02-02 10:37:49 +0000 UTC" firstStartedPulling="2026-02-02 10:37:52.193953327 +0000 UTC m=+353.285354777" lastFinishedPulling="2026-02-02 10:37:56.086831495 +0000 UTC m=+357.178232945" observedRunningTime="2026-02-02 10:37:57.120051114 +0000 UTC m=+358.211452574" watchObservedRunningTime="2026-02-02 10:37:57.121681843 +0000 UTC m=+358.213083293" Feb 02 10:37:57 crc 
kubenswrapper[4845]: I0202 10:37:57.170438 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" podStartSLOduration=3.008871634 podStartE2EDuration="7.170257279s" podCreationTimestamp="2026-02-02 10:37:50 +0000 UTC" firstStartedPulling="2026-02-02 10:37:51.918970823 +0000 UTC m=+353.010372273" lastFinishedPulling="2026-02-02 10:37:56.080356468 +0000 UTC m=+357.171757918" observedRunningTime="2026-02-02 10:37:57.164153684 +0000 UTC m=+358.255555134" watchObservedRunningTime="2026-02-02 10:37:57.170257279 +0000 UTC m=+358.261658749"
Feb 02 10:37:59 crc kubenswrapper[4845]: I0202 10:37:59.118585 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"0b439b7c2d0e62e945e5d30cfadcd81c6a64716614daa9633b1b986427581639"}
Feb 02 10:37:59 crc kubenswrapper[4845]: I0202 10:37:59.578849 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-986c9fb64-5l8tt"
Feb 02 10:37:59 crc kubenswrapper[4845]: I0202 10:37:59.579501 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-986c9fb64-5l8tt"
Feb 02 10:37:59 crc kubenswrapper[4845]: I0202 10:37:59.586171 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-986c9fb64-5l8tt"
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.132568 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"33eed5e5eabd757463b54e64495087f70ab77a391ebfacdef6d343ec4c9c0f68"}
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.132869 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"059d13b6797eb9ec594412fac039c0ccec1379fe9b99eb85abd2e9fd8ca21925"}
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.132905 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"128f9ad9dd13bf8b1b33334c3bc7c4418d334f8dd5688208e95cb87f6f725961"}
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.132915 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"83cf0123e5b040f9cf412941034f94b55d62908644c685237edafad816fa3358"}
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.144210 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-986c9fb64-5l8tt"
Feb 02 10:38:00 crc kubenswrapper[4845]: I0202 10:38:00.209475 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"]
Feb 02 10:38:03 crc kubenswrapper[4845]: I0202 10:38:03.522801 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" podUID="339fe372-b3de-4832-b32f-0218d2c0545b" containerName="registry" containerID="cri-o://c850483e123504ea082a0c8f17db4867ee8a686d50fc85724804f4fd70d8bc85" gracePeriod=30
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.157064 4845 generic.go:334] "Generic (PLEG): container finished" podID="339fe372-b3de-4832-b32f-0218d2c0545b" containerID="c850483e123504ea082a0c8f17db4867ee8a686d50fc85724804f4fd70d8bc85" exitCode=0
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.157181 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" event={"ID":"339fe372-b3de-4832-b32f-0218d2c0545b","Type":"ContainerDied","Data":"c850483e123504ea082a0c8f17db4867ee8a686d50fc85724804f4fd70d8bc85"}
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.419179 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539381 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77qx\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539596 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539636 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539664 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539697 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539762 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539787 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.539806 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted\") pod \"339fe372-b3de-4832-b32f-0218d2c0545b\" (UID: \"339fe372-b3de-4832-b32f-0218d2c0545b\") "
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.541629 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.541962 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.544644 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.544748 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx" (OuterVolumeSpecName: "kube-api-access-g77qx") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "kube-api-access-g77qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.545143 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.546004 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.555231 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.564618 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "339fe372-b3de-4832-b32f-0218d2c0545b" (UID: "339fe372-b3de-4832-b32f-0218d2c0545b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641507 4845 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/339fe372-b3de-4832-b32f-0218d2c0545b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641550 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77qx\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-kube-api-access-g77qx\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641562 4845 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641574 4845 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/339fe372-b3de-4832-b32f-0218d2c0545b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641586 4845 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/339fe372-b3de-4832-b32f-0218d2c0545b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641595 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:04 crc kubenswrapper[4845]: I0202 10:38:04.641606 4845 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/339fe372-b3de-4832-b32f-0218d2c0545b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.167591 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"f7d75d05-4636-4623-b561-1b2e713ac513","Type":"ContainerStarted","Data":"187f0b0a3c316873ab9f0432efcf0de8df4d4671188a910657b770cedee2ce3e"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182589 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"bb59568d7e6dbfa618e30600b9ae3490b6053428641e757c82de3683040bd144"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182641 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"497b8db88a0d2799c943bdf6e0b2dd9ea3601a6319a6d26a89fe39d13e3935ad"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182657 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"082d4181cf6af1c706594ceca5f14af36c661156c164848bceea4b0433d7d800"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182670 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"c74028304991a6503334b572eae0b2b4d575fc9767d97841c6e56fc9ba5bea6c"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182686 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"44bfc48da59cb2a817c495e024cd0b7088079bba61ad395d1a49701cd6bdb38c"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.182697 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"92c6b031-d11f-4f89-84e5-bbbe36ea3bba","Type":"ContainerStarted","Data":"a67cb8fac6f2d8b0a183276567f2f8aff5b39592b775f317bc448fb884e0e5c2"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.183828 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-thf72" event={"ID":"339fe372-b3de-4832-b32f-0218d2c0545b","Type":"ContainerDied","Data":"b02dab70c0d575fa03917d15a317aa67ee29edaf5ecdee7aa9680da7630022b9"}
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.183875 4845 scope.go:117] "RemoveContainer" containerID="c850483e123504ea082a0c8f17db4867ee8a686d50fc85724804f4fd70d8bc85"
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.184000 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-thf72"
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.216061 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=10.954754352 podStartE2EDuration="20.216045593s" podCreationTimestamp="2026-02-02 10:37:45 +0000 UTC" firstStartedPulling="2026-02-02 10:37:49.646460004 +0000 UTC m=+350.737861454" lastFinishedPulling="2026-02-02 10:37:58.907751245 +0000 UTC m=+359.999152695" observedRunningTime="2026-02-02 10:38:05.213179786 +0000 UTC m=+366.304581236" watchObservedRunningTime="2026-02-02 10:38:05.216045593 +0000 UTC m=+366.307447043"
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.236758 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"]
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.244074 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-thf72"]
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.266085 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.267291787 podStartE2EDuration="15.266059793s" podCreationTimestamp="2026-02-02 10:37:50 +0000 UTC" firstStartedPulling="2026-02-02 10:37:52.319310246 +0000 UTC m=+353.410711696" lastFinishedPulling="2026-02-02 10:38:04.318078252 +0000 UTC m=+365.409479702" observedRunningTime="2026-02-02 10:38:05.263498245 +0000 UTC m=+366.354899705" watchObservedRunningTime="2026-02-02 10:38:05.266059793 +0000 UTC m=+366.357461243"
Feb 02 10:38:05 crc kubenswrapper[4845]: I0202 10:38:05.725839 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339fe372-b3de-4832-b32f-0218d2c0545b" path="/var/lib/kubelet/pods/339fe372-b3de-4832-b32f-0218d2c0545b/volumes"
Feb 02 10:38:06 crc kubenswrapper[4845]: I0202 10:38:06.284087 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 10:38:10 crc kubenswrapper[4845]: I0202 10:38:10.547971 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p"
Feb 02 10:38:10 crc kubenswrapper[4845]: I0202 10:38:10.548590 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p"
Feb 02 10:38:16 crc kubenswrapper[4845]: I0202 10:38:16.238229 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:38:16 crc kubenswrapper[4845]: I0202 10:38:16.238778 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.253278 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8gjpm" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console" containerID="cri-o://966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976" gracePeriod=15
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.610386 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8gjpm_04d41e42-423a-4bac-bc05-3c424c978fd8/console/0.log"
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.610490 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8gjpm"
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664400 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664477 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664537 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664595 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664654 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664694 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstpx\" (UniqueName: \"kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.664828 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert\") pod \"04d41e42-423a-4bac-bc05-3c424c978fd8\" (UID: \"04d41e42-423a-4bac-bc05-3c424c978fd8\") "
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.667492 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.667619 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca" (OuterVolumeSpecName: "service-ca") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.667670 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config" (OuterVolumeSpecName: "console-config") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.667778 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.674268 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.674770 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx" (OuterVolumeSpecName: "kube-api-access-bstpx") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "kube-api-access-bstpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.680259 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "04d41e42-423a-4bac-bc05-3c424c978fd8" (UID: "04d41e42-423a-4bac-bc05-3c424c978fd8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767485 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767526 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstpx\" (UniqueName: \"kubernetes.io/projected/04d41e42-423a-4bac-bc05-3c424c978fd8-kube-api-access-bstpx\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767536 4845 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767544 4845 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-console-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767553 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767561 4845 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04d41e42-423a-4bac-bc05-3c424c978fd8-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:25 crc kubenswrapper[4845]: I0202 10:38:25.767569 4845 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04d41e42-423a-4bac-bc05-3c424c978fd8-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317164 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8gjpm_04d41e42-423a-4bac-bc05-3c424c978fd8/console/0.log"
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317272 4845 generic.go:334] "Generic (PLEG): container finished" podID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerID="966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976" exitCode=2
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317338 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8gjpm" event={"ID":"04d41e42-423a-4bac-bc05-3c424c978fd8","Type":"ContainerDied","Data":"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"}
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317382 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8gjpm" event={"ID":"04d41e42-423a-4bac-bc05-3c424c978fd8","Type":"ContainerDied","Data":"983de89da05d00215d904c21a6798495b0919b3a61d3d8890b2f47a3f16bcb7f"}
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317405 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8gjpm"
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.317435 4845 scope.go:117] "RemoveContainer" containerID="966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.337768 4845 scope.go:117] "RemoveContainer" containerID="966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"
Feb 02 10:38:26 crc kubenswrapper[4845]: E0202 10:38:26.338784 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976\": container with ID starting with 966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976 not found: ID does not exist" containerID="966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.338817 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976"} err="failed to get container status \"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976\": rpc error: code = NotFound desc = could not find container \"966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976\": container with ID starting with 966d7e4cb3833109e9bffa9e6cf13b2c4adf6ec09ec74e2fce7c96aacec9b976 not found: ID does not exist"
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.346076 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"]
Feb 02 10:38:26 crc kubenswrapper[4845]: I0202 10:38:26.352354 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8gjpm"]
Feb 02 10:38:27 crc kubenswrapper[4845]: I0202 10:38:27.722168 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" path="/var/lib/kubelet/pods/04d41e42-423a-4bac-bc05-3c424c978fd8/volumes"
Feb 02 10:38:30 crc kubenswrapper[4845]: I0202 10:38:30.555801 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p"
Feb 02 10:38:30 crc kubenswrapper[4845]: I0202 10:38:30.559952 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p"
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.237478 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.238280 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.238352 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9"
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.239187 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.239303 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3" gracePeriod=600
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.457225 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3" exitCode=0
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.457625 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3"}
Feb 02 10:38:46 crc kubenswrapper[4845]: I0202 10:38:46.457667 4845 scope.go:117] "RemoveContainer" containerID="5c10f188dedfdab51fcb9b4bb57eb7eba62d3895c360748e15e01ffa4d95a428"
Feb 02 10:38:47 crc kubenswrapper[4845]: I0202 10:38:47.467617 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582"}
Feb 02 10:38:51 crc kubenswrapper[4845]: I0202 10:38:51.283806 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 10:38:51 crc kubenswrapper[4845]: I0202 10:38:51.317992 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 10:38:51 crc kubenswrapper[4845]: I0202 10:38:51.523543 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.889211 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"]
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.890268 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" podUID="ec05463b-fba2-442c-9ba3-893de7b61f92" containerName="metrics-server" containerID="cri-o://a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b" gracePeriod=170
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.894661 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-56bfc86c66-lp8fz"]
Feb 02 10:39:08 crc kubenswrapper[4845]: E0202 10:39:08.895131 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339fe372-b3de-4832-b32f-0218d2c0545b" containerName="registry"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.895156 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="339fe372-b3de-4832-b32f-0218d2c0545b" containerName="registry"
Feb 02 10:39:08 crc kubenswrapper[4845]: E0202 10:39:08.895169 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.895179 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.895319 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d41e42-423a-4bac-bc05-3c424c978fd8" containerName="console"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.895343 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="339fe372-b3de-4832-b32f-0218d2c0545b" containerName="registry"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.895911 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.903552 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-b166r02mp1sa7"
Feb 02 10:39:08 crc kubenswrapper[4845]: I0202 10:39:08.918783 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-56bfc86c66-lp8fz"]
Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.093915 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-client-certs\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz"
Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.093990 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7e5424af-7d57-4d12-be1c-dcddc1187cdf-audit-log\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz"
Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.094017 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-server-tls\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz"
Feb 
02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.094045 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnswl\" (UniqueName: \"kubernetes.io/projected/7e5424af-7d57-4d12-be1c-dcddc1187cdf-kube-api-access-rnswl\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.094067 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-client-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.094220 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-metrics-server-audit-profiles\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.094529 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195479 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-client-certs\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195539 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7e5424af-7d57-4d12-be1c-dcddc1187cdf-audit-log\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195579 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-server-tls\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195611 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-client-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195635 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnswl\" (UniqueName: \"kubernetes.io/projected/7e5424af-7d57-4d12-be1c-dcddc1187cdf-kube-api-access-rnswl\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195666 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-metrics-server-audit-profiles\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.195731 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.196182 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7e5424af-7d57-4d12-be1c-dcddc1187cdf-audit-log\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.196821 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.197332 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7e5424af-7d57-4d12-be1c-dcddc1187cdf-metrics-server-audit-profiles\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " 
pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.202140 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-server-tls\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.202231 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-client-ca-bundle\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.211583 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7e5424af-7d57-4d12-be1c-dcddc1187cdf-secret-metrics-client-certs\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.216094 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnswl\" (UniqueName: \"kubernetes.io/projected/7e5424af-7d57-4d12-be1c-dcddc1187cdf-kube-api-access-rnswl\") pod \"metrics-server-56bfc86c66-lp8fz\" (UID: \"7e5424af-7d57-4d12-be1c-dcddc1187cdf\") " pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.220408 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.508733 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-56bfc86c66-lp8fz"] Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.606695 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" event={"ID":"7e5424af-7d57-4d12-be1c-dcddc1187cdf","Type":"ContainerStarted","Data":"5008ae273e8c2ed3bc215d704a8cb8a32031a8d776caa8504f953ec686197e97"} Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.832020 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs"] Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.842731 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.879981 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs"] Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.884640 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:39:09 crc kubenswrapper[4845]: I0202 10:39:09.884947 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" podUID="f5753406-2b60-4929-85b3-8e01c37218b3" containerName="monitoring-plugin" containerID="cri-o://d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4" gracePeriod=30 Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.009967 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/4bd9b974-5ead-4e20-ae4a-724c03f0838d-monitoring-plugin-cert\") pod \"monitoring-plugin-75c47d7fdf-fpzvs\" (UID: \"4bd9b974-5ead-4e20-ae4a-724c03f0838d\") " pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.111331 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4bd9b974-5ead-4e20-ae4a-724c03f0838d-monitoring-plugin-cert\") pod \"monitoring-plugin-75c47d7fdf-fpzvs\" (UID: \"4bd9b974-5ead-4e20-ae4a-724c03f0838d\") " pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.117209 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4bd9b974-5ead-4e20-ae4a-724c03f0838d-monitoring-plugin-cert\") pod \"monitoring-plugin-75c47d7fdf-fpzvs\" (UID: \"4bd9b974-5ead-4e20-ae4a-724c03f0838d\") " pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.202257 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.308456 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-db4cbb94b-xntgm_f5753406-2b60-4929-85b3-8e01c37218b3/monitoring-plugin/0.log" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.308535 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.415568 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert\") pod \"f5753406-2b60-4929-85b3-8e01c37218b3\" (UID: \"f5753406-2b60-4929-85b3-8e01c37218b3\") " Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.419020 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert" (OuterVolumeSpecName: "monitoring-plugin-cert") pod "f5753406-2b60-4929-85b3-8e01c37218b3" (UID: "f5753406-2b60-4929-85b3-8e01c37218b3"). InnerVolumeSpecName "monitoring-plugin-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.431753 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs"] Feb 02 10:39:10 crc kubenswrapper[4845]: W0202 10:39:10.437089 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd9b974_5ead_4e20_ae4a_724c03f0838d.slice/crio-c08c16b57884b84cb1b2d213f565267332bfb8756cb5ff46d283053ffd895bd3 WatchSource:0}: Error finding container c08c16b57884b84cb1b2d213f565267332bfb8756cb5ff46d283053ffd895bd3: Status 404 returned error can't find the container with id c08c16b57884b84cb1b2d213f565267332bfb8756cb5ff46d283053ffd895bd3 Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.516808 4845 reconciler_common.go:293] "Volume detached for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f5753406-2b60-4929-85b3-8e01c37218b3-monitoring-plugin-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.612746 4845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" event={"ID":"7e5424af-7d57-4d12-be1c-dcddc1187cdf","Type":"ContainerStarted","Data":"185fe07283a7c1977eb081babf13cfa850cf10bbd1e0d246cfdc0846b9337db7"} Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.614289 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" event={"ID":"4bd9b974-5ead-4e20-ae4a-724c03f0838d","Type":"ContainerStarted","Data":"0f067f4506778e52e1cf90e598245458d2f6aff5c06048ae0bdd3f8ee36b053e"} Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.614324 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" event={"ID":"4bd9b974-5ead-4e20-ae4a-724c03f0838d","Type":"ContainerStarted","Data":"c08c16b57884b84cb1b2d213f565267332bfb8756cb5ff46d283053ffd895bd3"} Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617119 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-db4cbb94b-xntgm_f5753406-2b60-4929-85b3-8e01c37218b3/monitoring-plugin/0.log" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617162 4845 generic.go:334] "Generic (PLEG): container finished" podID="f5753406-2b60-4929-85b3-8e01c37218b3" containerID="d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4" exitCode=2 Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617185 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" event={"ID":"f5753406-2b60-4929-85b3-8e01c37218b3","Type":"ContainerDied","Data":"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4"} Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617204 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" 
event={"ID":"f5753406-2b60-4929-85b3-8e01c37218b3","Type":"ContainerDied","Data":"7967ac4b496b366ce99f5e32cc502e0dcf108c64a09981d769ca1e0919a03dd6"} Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617223 4845 scope.go:117] "RemoveContainer" containerID="d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.617242 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.633820 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" podStartSLOduration=2.633802236 podStartE2EDuration="2.633802236s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:39:10.628744688 +0000 UTC m=+431.720146158" watchObservedRunningTime="2026-02-02 10:39:10.633802236 +0000 UTC m=+431.725203686" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.639520 4845 scope.go:117] "RemoveContainer" containerID="d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4" Feb 02 10:39:10 crc kubenswrapper[4845]: E0202 10:39:10.640499 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4\": container with ID starting with d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4 not found: ID does not exist" containerID="d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.640561 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4"} 
err="failed to get container status \"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4\": rpc error: code = NotFound desc = could not find container \"d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4\": container with ID starting with d6a4724319214b5af799ddf12176c0892718b0b92f3f9bc2e5eb114ce5ea3ae4 not found: ID does not exist" Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.657227 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:39:10 crc kubenswrapper[4845]: I0202 10:39:10.661513 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/monitoring-plugin-db4cbb94b-xntgm"] Feb 02 10:39:11 crc kubenswrapper[4845]: I0202 10:39:11.720266 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5753406-2b60-4929-85b3-8e01c37218b3" path="/var/lib/kubelet/pods/f5753406-2b60-4929-85b3-8e01c37218b3/volumes" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.203551 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.209029 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.226576 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-75c47d7fdf-fpzvs" podStartSLOduration=11.226549728 podStartE2EDuration="11.226549728s" podCreationTimestamp="2026-02-02 10:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:39:11.635993541 +0000 UTC m=+432.727394991" watchObservedRunningTime="2026-02-02 10:39:20.226549728 +0000 UTC m=+441.317951208" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 
10:39:20.334874 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:39:20 crc kubenswrapper[4845]: E0202 10:39:20.335146 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5753406-2b60-4929-85b3-8e01c37218b3" containerName="monitoring-plugin" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.335158 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5753406-2b60-4929-85b3-8e01c37218b3" containerName="monitoring-plugin" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.335316 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5753406-2b60-4929-85b3-8e01c37218b3" containerName="monitoring-plugin" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.335919 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.355375 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524584 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9qjk\" (UniqueName: \"kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524736 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524783 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524814 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524841 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524869 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.524942 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc 
kubenswrapper[4845]: I0202 10:39:20.627027 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.627119 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.627153 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.627187 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.627223 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 
10:39:20.627276 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.627395 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9qjk\" (UniqueName: \"kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.628825 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.628825 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.629009 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.629158 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.634036 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.634358 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.652440 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9qjk\" (UniqueName: \"kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk\") pod \"console-dd6cc54dd-nz852\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.660724 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:20 crc kubenswrapper[4845]: I0202 10:39:20.856060 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:39:21 crc kubenswrapper[4845]: I0202 10:39:21.685804 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd6cc54dd-nz852" event={"ID":"760b8b36-f06d-49ac-9de5-72b222f509d0","Type":"ContainerStarted","Data":"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d"} Feb 02 10:39:21 crc kubenswrapper[4845]: I0202 10:39:21.686157 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd6cc54dd-nz852" event={"ID":"760b8b36-f06d-49ac-9de5-72b222f509d0","Type":"ContainerStarted","Data":"3bdbcedf982f353d1f33f9e0800794674194143b9b56bad7bb0604d71624ce6e"} Feb 02 10:39:21 crc kubenswrapper[4845]: I0202 10:39:21.713116 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-dd6cc54dd-nz852" podStartSLOduration=1.713097883 podStartE2EDuration="1.713097883s" podCreationTimestamp="2026-02-02 10:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:39:21.710253309 +0000 UTC m=+442.801654799" watchObservedRunningTime="2026-02-02 10:39:21.713097883 +0000 UTC m=+442.804499333" Feb 02 10:39:29 crc kubenswrapper[4845]: I0202 10:39:29.221266 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:29 crc kubenswrapper[4845]: I0202 10:39:29.221846 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:29 crc kubenswrapper[4845]: I0202 10:39:29.226955 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:29 crc kubenswrapper[4845]: I0202 10:39:29.737875 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-56bfc86c66-lp8fz" Feb 02 10:39:30 crc kubenswrapper[4845]: I0202 10:39:30.662303 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:30 crc kubenswrapper[4845]: I0202 10:39:30.662357 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:30 crc kubenswrapper[4845]: I0202 10:39:30.670032 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:30 crc kubenswrapper[4845]: I0202 10:39:30.744681 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:39:30 crc kubenswrapper[4845]: I0202 10:39:30.816197 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:39:55 crc kubenswrapper[4845]: I0202 10:39:55.861577 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-986c9fb64-5l8tt" podUID="615f8561-b519-43b6-8864-9b1275443e98" containerName="console" containerID="cri-o://9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38" gracePeriod=15 Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.209990 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-986c9fb64-5l8tt_615f8561-b519-43b6-8864-9b1275443e98/console/0.log" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.210417 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381140 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381334 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381368 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381425 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381447 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhj26\" (UniqueName: \"kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381719 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.381749 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config\") pod \"615f8561-b519-43b6-8864-9b1275443e98\" (UID: \"615f8561-b519-43b6-8864-9b1275443e98\") " Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.384580 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca" (OuterVolumeSpecName: "service-ca") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.384996 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.385424 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.386128 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config" (OuterVolumeSpecName: "console-config") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.387318 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.388109 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.388630 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26" (OuterVolumeSpecName: "kube-api-access-mhj26") pod "615f8561-b519-43b6-8864-9b1275443e98" (UID: "615f8561-b519-43b6-8864-9b1275443e98"). InnerVolumeSpecName "kube-api-access-mhj26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483724 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483756 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhj26\" (UniqueName: \"kubernetes.io/projected/615f8561-b519-43b6-8864-9b1275443e98-kube-api-access-mhj26\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483769 4845 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483778 4845 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483787 4845 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/615f8561-b519-43b6-8864-9b1275443e98-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483795 4845 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.483803 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615f8561-b519-43b6-8864-9b1275443e98-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:56 crc 
kubenswrapper[4845]: I0202 10:39:56.906312 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-986c9fb64-5l8tt_615f8561-b519-43b6-8864-9b1275443e98/console/0.log" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.907108 4845 generic.go:334] "Generic (PLEG): container finished" podID="615f8561-b519-43b6-8864-9b1275443e98" containerID="9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38" exitCode=2 Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.907144 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-986c9fb64-5l8tt" event={"ID":"615f8561-b519-43b6-8864-9b1275443e98","Type":"ContainerDied","Data":"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38"} Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.907163 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-986c9fb64-5l8tt" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.907183 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-986c9fb64-5l8tt" event={"ID":"615f8561-b519-43b6-8864-9b1275443e98","Type":"ContainerDied","Data":"3fbb867d7ca80df191b9729b7982330f959b62e993162b6e2d6a5e4f58aec846"} Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.907200 4845 scope.go:117] "RemoveContainer" containerID="9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.925163 4845 scope.go:117] "RemoveContainer" containerID="9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38" Feb 02 10:39:56 crc kubenswrapper[4845]: E0202 10:39:56.925619 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38\": container with ID starting with 9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38 not 
found: ID does not exist" containerID="9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.925667 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38"} err="failed to get container status \"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38\": rpc error: code = NotFound desc = could not find container \"9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38\": container with ID starting with 9a52fdac0cb24cbbe17aceb8d7148b4a37460fce98376930f490c346042e8c38 not found: ID does not exist" Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.934455 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:39:56 crc kubenswrapper[4845]: I0202 10:39:56.939520 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-986c9fb64-5l8tt"] Feb 02 10:39:57 crc kubenswrapper[4845]: I0202 10:39:57.720871 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615f8561-b519-43b6-8864-9b1275443e98" path="/var/lib/kubelet/pods/615f8561-b519-43b6-8864-9b1275443e98/volumes" Feb 02 10:40:46 crc kubenswrapper[4845]: I0202 10:40:46.237997 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:40:46 crc kubenswrapper[4845]: I0202 10:40:46.238643 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 02 10:41:16 crc kubenswrapper[4845]: I0202 10:41:16.237795 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:41:16 crc kubenswrapper[4845]: I0202 10:41:16.238378 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.330877 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw"] Feb 02 10:41:29 crc kubenswrapper[4845]: E0202 10:41:29.331684 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615f8561-b519-43b6-8864-9b1275443e98" containerName="console" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.331700 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="615f8561-b519-43b6-8864-9b1275443e98" containerName="console" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.331832 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="615f8561-b519-43b6-8864-9b1275443e98" containerName="console" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.332723 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.334905 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.353627 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw"] Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.526696 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.526777 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.526812 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrpl\" (UniqueName: \"kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: 
I0202 10:41:29.628958 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.629029 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.629063 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrpl\" (UniqueName: \"kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.629427 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.629502 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.649282 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrpl\" (UniqueName: \"kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.651782 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:29 crc kubenswrapper[4845]: I0202 10:41:29.881353 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw"] Feb 02 10:41:30 crc kubenswrapper[4845]: I0202 10:41:30.508266 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerStarted","Data":"97729e603ef0e8003e95db2d6b3b5d9b000b47fc6a0f065d959728f4b291e06f"} Feb 02 10:41:30 crc kubenswrapper[4845]: I0202 10:41:30.508321 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerStarted","Data":"af01be384acc5c4c309613560eaef002ab894f86be6972102ca68e7f9316fbb7"} Feb 02 10:41:31 crc kubenswrapper[4845]: I0202 10:41:31.514541 4845 
generic.go:334] "Generic (PLEG): container finished" podID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerID="97729e603ef0e8003e95db2d6b3b5d9b000b47fc6a0f065d959728f4b291e06f" exitCode=0 Feb 02 10:41:31 crc kubenswrapper[4845]: I0202 10:41:31.514607 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerDied","Data":"97729e603ef0e8003e95db2d6b3b5d9b000b47fc6a0f065d959728f4b291e06f"} Feb 02 10:41:31 crc kubenswrapper[4845]: I0202 10:41:31.516771 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:41:33 crc kubenswrapper[4845]: I0202 10:41:33.528051 4845 generic.go:334] "Generic (PLEG): container finished" podID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerID="99242635b3545b38c06ae8cd903b16c6a2e4f31e02fd592ca7f3bde980c6beed" exitCode=0 Feb 02 10:41:33 crc kubenswrapper[4845]: I0202 10:41:33.528152 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerDied","Data":"99242635b3545b38c06ae8cd903b16c6a2e4f31e02fd592ca7f3bde980c6beed"} Feb 02 10:41:34 crc kubenswrapper[4845]: I0202 10:41:34.538358 4845 generic.go:334] "Generic (PLEG): container finished" podID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerID="06718d605415f80d8e16cf0b3a290439b070e84c0fec256aa8f7f7566836a91c" exitCode=0 Feb 02 10:41:34 crc kubenswrapper[4845]: I0202 10:41:34.538411 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerDied","Data":"06718d605415f80d8e16cf0b3a290439b070e84c0fec256aa8f7f7566836a91c"} Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 
10:41:35.813351 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.920606 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrpl\" (UniqueName: \"kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl\") pod \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.920693 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util\") pod \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.921068 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle\") pod \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\" (UID: \"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907\") " Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.923294 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle" (OuterVolumeSpecName: "bundle") pod "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" (UID: "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.928026 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl" (OuterVolumeSpecName: "kube-api-access-znrpl") pod "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" (UID: "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907"). InnerVolumeSpecName "kube-api-access-znrpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:35 crc kubenswrapper[4845]: I0202 10:41:35.934990 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util" (OuterVolumeSpecName: "util") pod "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" (UID: "fe3cf6fe-df9c-4484-a6af-75fe0b5fa907"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.022904 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrpl\" (UniqueName: \"kubernetes.io/projected/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-kube-api-access-znrpl\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.022939 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.022948 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe3cf6fe-df9c-4484-a6af-75fe0b5fa907-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.569172 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" 
event={"ID":"fe3cf6fe-df9c-4484-a6af-75fe0b5fa907","Type":"ContainerDied","Data":"af01be384acc5c4c309613560eaef002ab894f86be6972102ca68e7f9316fbb7"} Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.569229 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af01be384acc5c4c309613560eaef002ab894f86be6972102ca68e7f9316fbb7" Feb 02 10:41:36 crc kubenswrapper[4845]: I0202 10:41:36.569271 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.271855 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.386383 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387053 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfgfq\" (UniqueName: \"kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387161 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: 
I0202 10:41:39.387179 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387211 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387269 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387322 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387361 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle\") pod \"ec05463b-fba2-442c-9ba3-893de7b61f92\" (UID: \"ec05463b-fba2-442c-9ba3-893de7b61f92\") " Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387623 4845 
reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-configmap-kubelet-serving-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.387803 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.388632 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log" (OuterVolumeSpecName: "audit-log") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.393176 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.393266 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). 
InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.393614 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.393907 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq" (OuterVolumeSpecName: "kube-api-access-zfgfq") pod "ec05463b-fba2-442c-9ba3-893de7b61f92" (UID: "ec05463b-fba2-442c-9ba3-893de7b61f92"). InnerVolumeSpecName "kube-api-access-zfgfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.495919 4845 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-client-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.495963 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfgfq\" (UniqueName: \"kubernetes.io/projected/ec05463b-fba2-442c-9ba3-893de7b61f92-kube-api-access-zfgfq\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.495981 4845 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ec05463b-fba2-442c-9ba3-893de7b61f92-metrics-server-audit-profiles\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.495996 4845 reconciler_common.go:293] "Volume detached for 
volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-server-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.496010 4845 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ec05463b-fba2-442c-9ba3-893de7b61f92-secret-metrics-client-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.496022 4845 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ec05463b-fba2-442c-9ba3-893de7b61f92-audit-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.585699 4845 generic.go:334] "Generic (PLEG): container finished" podID="ec05463b-fba2-442c-9ba3-893de7b61f92" containerID="a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b" exitCode=0 Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.585805 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" event={"ID":"ec05463b-fba2-442c-9ba3-893de7b61f92","Type":"ContainerDied","Data":"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b"} Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.585864 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" event={"ID":"ec05463b-fba2-442c-9ba3-893de7b61f92","Type":"ContainerDied","Data":"9c81f12ccf75970d5f3c58822634af178231901d9efb8c97f05d144b2887fc22"} Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.585913 4845 scope.go:117] "RemoveContainer" containerID="a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.586075 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76d65679c8-9dw7p" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.621129 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"] Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.621184 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-76d65679c8-9dw7p"] Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.621292 4845 scope.go:117] "RemoveContainer" containerID="a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b" Feb 02 10:41:39 crc kubenswrapper[4845]: E0202 10:41:39.622857 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b\": container with ID starting with a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b not found: ID does not exist" containerID="a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.622935 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b"} err="failed to get container status \"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b\": rpc error: code = NotFound desc = could not find container \"a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b\": container with ID starting with a4dacca20c6457fa0330baeb2e48ff27f134770904167edbaba74443ba0efc3b not found: ID does not exist" Feb 02 10:41:39 crc kubenswrapper[4845]: I0202 10:41:39.721501 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec05463b-fba2-442c-9ba3-893de7b61f92" path="/var/lib/kubelet/pods/ec05463b-fba2-442c-9ba3-893de7b61f92/volumes" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.181945 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sh5vd"] Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182482 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="nbdb" containerID="cri-o://409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182505 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182613 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-acl-logging" containerID="cri-o://2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182668 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="sbdb" containerID="cri-o://02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182651 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="northd" containerID="cri-o://74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182644 4845 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-node" containerID="cri-o://8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.182979 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-controller" containerID="cri-o://36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.246135 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" containerID="cri-o://e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" gracePeriod=30 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.595086 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/2.log" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.595598 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/1.log" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.595649 4845 generic.go:334] "Generic (PLEG): container finished" podID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" containerID="276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b" exitCode=2 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.595723 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" 
event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerDied","Data":"276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b"} Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.595773 4845 scope.go:117] "RemoveContainer" containerID="a7524e10c6d21267ccb31b5667ff6f876f7954e0b4cfca364afc003cb525513f" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.596385 4845 scope.go:117] "RemoveContainer" containerID="276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b" Feb 02 10:41:40 crc kubenswrapper[4845]: E0202 10:41:40.596707 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kzwst_openshift-multus(310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3)\"" pod="openshift-multus/multus-kzwst" podUID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.598855 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovnkube-controller/3.log" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.600906 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-acl-logging/0.log" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601350 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-controller/0.log" Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601774 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" exitCode=0 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601791 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" exitCode=143 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601801 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" exitCode=143 Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601820 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601843 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.601854 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} Feb 02 10:41:40 crc kubenswrapper[4845]: I0202 10:41:40.660795 4845 scope.go:117] "RemoveContainer" containerID="b1031f0edf0ca31e33f0e6971035d19d313184cece6b93477345110db5db994a" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.431645 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-acl-logging/0.log" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.432337 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-controller/0.log" Feb 02 
10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.433573 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524713 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524792 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524822 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524843 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524846 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524919 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524951 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524950 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524975 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.524984 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525002 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525030 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525066 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdgfw\" (UniqueName: \"kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525102 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525131 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525163 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525192 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525217 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525243 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525265 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525284 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: 
\"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525308 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525325 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch\") pod \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\" (UID: \"7b93b041-3f3f-47ba-a9d4-d09de1b326dc\") " Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525525 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525593 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525629 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525656 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525672 4845 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525688 4845 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525701 4845 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525712 4845 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525747 4845 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc 
kubenswrapper[4845]: I0202 10:41:41.525778 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525805 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525828 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash" (OuterVolumeSpecName: "host-slash") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525853 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525876 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket" (OuterVolumeSpecName: "log-socket") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525917 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log" (OuterVolumeSpecName: "node-log") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525925 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.525942 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.526195 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.526419 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539397 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zgmmp"] Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539661 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539675 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539684 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-acl-logging" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539689 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-acl-logging" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539696 4845 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="pull" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539702 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="pull" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539711 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539717 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539725 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539730 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539738 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="util" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539743 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="util" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539751 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539756 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539766 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539773 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539781 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kubecfg-setup" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539786 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kubecfg-setup" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539793 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="northd" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539798 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="northd" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539805 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="nbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539810 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="nbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539816 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-node" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539821 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-node" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539832 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="extract" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539837 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="extract" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539849 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec05463b-fba2-442c-9ba3-893de7b61f92" containerName="metrics-server" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539854 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec05463b-fba2-442c-9ba3-893de7b61f92" containerName="metrics-server" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.539862 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="sbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539867 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="sbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.539991 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540006 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540014 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="northd" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540026 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-node" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540036 4845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540046 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-acl-logging" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540053 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="nbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540060 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="sbdb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540067 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3cf6fe-df9c-4484-a6af-75fe0b5fa907" containerName="extract" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540076 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540082 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540090 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovn-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540099 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec05463b-fba2-442c-9ba3-893de7b61f92" containerName="metrics-server" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.540202 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540208 4845 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.540218 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540223 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.540321 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerName="ovnkube-controller" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.542379 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.542363 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw" (OuterVolumeSpecName: "kube-api-access-wdgfw") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "kube-api-access-wdgfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.542386 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.556449 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7b93b041-3f3f-47ba-a9d4-d09de1b326dc" (UID: "7b93b041-3f3f-47ba-a9d4-d09de1b326dc"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.611437 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/2.log" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.615778 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-acl-logging/0.log" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616411 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sh5vd_7b93b041-3f3f-47ba-a9d4-d09de1b326dc/ovn-controller/0.log" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616767 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" exitCode=0 Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616800 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" exitCode=0 Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616812 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" exitCode=0 Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616823 4845 
generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" exitCode=0 Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616835 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" exitCode=0 Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.616899 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617030 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617106 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617126 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617175 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617190 4845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617200 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5vd" event={"ID":"7b93b041-3f3f-47ba-a9d4-d09de1b326dc","Type":"ContainerDied","Data":"000add16087c8ea520a073a5f979801fd8148c67d21762dc28e5704d78c31411"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617214 4845 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617228 4845 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617235 4845 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.617278 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628105 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-bin\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628171 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovn-node-metrics-cert\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628209 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-systemd-units\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628236 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628263 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-ovn\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628285 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628360 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-node-log\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628384 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-systemd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628404 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-var-lib-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628422 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-kubelet\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628441 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-log-socket\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628465 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-netns\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628484 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-script-lib\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628507 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-env-overrides\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628531 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9gz\" (UniqueName: \"kubernetes.io/projected/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-kube-api-access-ch9gz\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628556 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-netd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc 
kubenswrapper[4845]: I0202 10:41:41.628585 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628619 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-slash\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628641 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-config\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628666 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-etc-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628724 4845 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628737 4845 reconciler_common.go:293] "Volume 
detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628750 4845 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628762 4845 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628773 4845 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628783 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdgfw\" (UniqueName: \"kubernetes.io/projected/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-kube-api-access-wdgfw\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628794 4845 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628805 4845 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628816 4845 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628828 4845 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628841 4845 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628850 4845 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628863 4845 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628872 4845 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.628902 4845 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b93b041-3f3f-47ba-a9d4-d09de1b326dc-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.643496 4845 scope.go:117] "RemoveContainer" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.671556 4845 scope.go:117] "RemoveContainer" 
containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.690331 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sh5vd"] Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.691732 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.724731 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.725366 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sh5vd"] Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729624 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-netns\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729663 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-script-lib\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729697 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-env-overrides\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729713 
4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9gz\" (UniqueName: \"kubernetes.io/projected/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-kube-api-access-ch9gz\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729730 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-netd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729770 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729778 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-netns\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729811 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-slash\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.729933 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-config\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730013 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-etc-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730089 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-bin\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730129 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovn-node-metrics-cert\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730188 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-systemd-units\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730253 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730303 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-ovn\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730384 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730471 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-node-log\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730501 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-systemd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730559 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-kubelet\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730583 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-var-lib-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730624 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-log-socket\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730913 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-etc-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.730965 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-bin\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731156 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-script-lib\") pod 
\"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731442 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-systemd-units\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731474 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-cni-netd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731585 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-kubelet\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731612 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-node-log\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731629 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731693 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-var-lib-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731721 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-slash\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731744 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-host-run-ovn-kubernetes\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731788 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-openvswitch\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.731958 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-log-socket\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 
10:41:41.731982 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-systemd\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.732310 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovnkube-config\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.732351 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-run-ovn\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.732357 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-env-overrides\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.736462 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-ovn-node-metrics-cert\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.761675 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9gz\" (UniqueName: 
\"kubernetes.io/projected/d82eed2b-a080-46e8-86d6-9fd5fc6ee721-kube-api-access-ch9gz\") pod \"ovnkube-node-zgmmp\" (UID: \"d82eed2b-a080-46e8-86d6-9fd5fc6ee721\") " pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.775091 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.807114 4845 scope.go:117] "RemoveContainer" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.864582 4845 scope.go:117] "RemoveContainer" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.890962 4845 scope.go:117] "RemoveContainer" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.908268 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.908684 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.915269 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.915324 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} err="failed to get container status \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.915355 4845 scope.go:117] "RemoveContainer" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.917449 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.917480 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} err="failed to get container status \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.917498 4845 scope.go:117] "RemoveContainer" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.918126 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918174 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} err="failed to get container status \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918208 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.918586 4845 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918614 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} err="failed to get container status \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": rpc error: code = NotFound desc = could not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918630 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.918950 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not exist" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918975 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} err="failed to get container status \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": rpc error: code = NotFound desc = could not find container 
\"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.918990 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.919245 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.919266 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} err="failed to get container status \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.919280 4845 scope.go:117] "RemoveContainer" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.919647 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": container with ID starting with 2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3 not found: ID does not exist" 
containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.919682 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} err="failed to get container status \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": rpc error: code = NotFound desc = could not find container \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": container with ID starting with 2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.919700 4845 scope.go:117] "RemoveContainer" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.921557 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": container with ID starting with 36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb not found: ID does not exist" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.921581 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} err="failed to get container status \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": rpc error: code = NotFound desc = could not find container \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": container with ID starting with 36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.921597 4845 scope.go:117] 
"RemoveContainer" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: E0202 10:41:41.921875 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": container with ID starting with 9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1 not found: ID does not exist" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.921918 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} err="failed to get container status \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": rpc error: code = NotFound desc = could not find container \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": container with ID starting with 9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.921931 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922329 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} err="failed to get container status \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922352 4845 
scope.go:117] "RemoveContainer" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922615 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} err="failed to get container status \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922647 4845 scope.go:117] "RemoveContainer" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922911 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} err="failed to get container status \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.922931 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923143 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} err="failed to get container status \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": rpc 
error: code = NotFound desc = could not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923170 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923498 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} err="failed to get container status \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": rpc error: code = NotFound desc = could not find container \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923517 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923866 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} err="failed to get container status \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.923900 4845 scope.go:117] "RemoveContainer" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc 
kubenswrapper[4845]: I0202 10:41:41.924117 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} err="failed to get container status \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": rpc error: code = NotFound desc = could not find container \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": container with ID starting with 2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.924137 4845 scope.go:117] "RemoveContainer" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.924967 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} err="failed to get container status \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": rpc error: code = NotFound desc = could not find container \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": container with ID starting with 36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.924999 4845 scope.go:117] "RemoveContainer" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925220 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} err="failed to get container status \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": rpc error: code = NotFound desc = could not find container \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": container 
with ID starting with 9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925252 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925489 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} err="failed to get container status \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925509 4845 scope.go:117] "RemoveContainer" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925700 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} err="failed to get container status \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925721 4845 scope.go:117] "RemoveContainer" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925949 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} err="failed to get container status \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.925967 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.926336 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} err="failed to get container status \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": rpc error: code = NotFound desc = could not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.926400 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.926777 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} err="failed to get container status \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": rpc error: code = NotFound desc = could not find container \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not 
exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.926802 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.927211 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} err="failed to get container status \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.927232 4845 scope.go:117] "RemoveContainer" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.931312 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} err="failed to get container status \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": rpc error: code = NotFound desc = could not find container \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": container with ID starting with 2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.931344 4845 scope.go:117] "RemoveContainer" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.931910 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} err="failed to get container status 
\"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": rpc error: code = NotFound desc = could not find container \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": container with ID starting with 36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.931929 4845 scope.go:117] "RemoveContainer" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.936726 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} err="failed to get container status \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": rpc error: code = NotFound desc = could not find container \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": container with ID starting with 9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.936765 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.937330 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} err="failed to get container status \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.937387 4845 scope.go:117] "RemoveContainer" 
containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.937778 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} err="failed to get container status \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.937829 4845 scope.go:117] "RemoveContainer" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.942389 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} err="failed to get container status \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.942519 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.946117 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} err="failed to get container status \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": rpc error: code = NotFound desc = could 
not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.946218 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.948020 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} err="failed to get container status \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": rpc error: code = NotFound desc = could not find container \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.948119 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.951556 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} err="failed to get container status \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.951651 4845 scope.go:117] "RemoveContainer" containerID="2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 
10:41:41.953327 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3"} err="failed to get container status \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": rpc error: code = NotFound desc = could not find container \"2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3\": container with ID starting with 2892ff3b75844390937d52755502815dfadf7751fd15ada1334e1baee4346cf3 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.953416 4845 scope.go:117] "RemoveContainer" containerID="36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.954118 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb"} err="failed to get container status \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": rpc error: code = NotFound desc = could not find container \"36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb\": container with ID starting with 36972d6206bd60feed48b6f9dbccaf4f05d994ea7bdd06336644428c25b2c3fb not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.954215 4845 scope.go:117] "RemoveContainer" containerID="9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.954519 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1"} err="failed to get container status \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": rpc error: code = NotFound desc = could not find container \"9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1\": container with ID starting with 
9e9066826bc923668c4573d1732c3256ef6cfc03494d40eb8499775ab5658ae1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.954610 4845 scope.go:117] "RemoveContainer" containerID="e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.954982 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0"} err="failed to get container status \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": rpc error: code = NotFound desc = could not find container \"e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0\": container with ID starting with e82fe127f51288ef1b1322c9522ff12a2264fb3652d9a6840e4c07868aeae7b0 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.955074 4845 scope.go:117] "RemoveContainer" containerID="02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.955391 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1"} err="failed to get container status \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": rpc error: code = NotFound desc = could not find container \"02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1\": container with ID starting with 02fa53ffb692bec680de2862735d2af4b4cbfe1ae757617ba50bfcf582de44d1 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.955483 4845 scope.go:117] "RemoveContainer" containerID="409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956058 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658"} err="failed to get container status \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": rpc error: code = NotFound desc = could not find container \"409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658\": container with ID starting with 409f9ac81e645675e175199a51cb31223b386d8e2d985f1a59a1de038b18f658 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956146 4845 scope.go:117] "RemoveContainer" containerID="74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956387 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397"} err="failed to get container status \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": rpc error: code = NotFound desc = could not find container \"74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397\": container with ID starting with 74f739ca28bbe3712cf8954196b49eeef72d7361f88fb5f5d686cc61cdaf3397 not found: ID does not exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956465 4845 scope.go:117] "RemoveContainer" containerID="79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956679 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73"} err="failed to get container status \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": rpc error: code = NotFound desc = could not find container \"79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73\": container with ID starting with 79fc5049b517c8f029fff8f78cd5164fe30480307280fa2f2e206e773a2f3f73 not found: ID does not 
exist" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956749 4845 scope.go:117] "RemoveContainer" containerID="8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7" Feb 02 10:41:41 crc kubenswrapper[4845]: I0202 10:41:41.956985 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7"} err="failed to get container status \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": rpc error: code = NotFound desc = could not find container \"8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7\": container with ID starting with 8943a1af4d3e926e148f5107a7cbbf5b672b1ce0ef2812f9a5c4245b307721c7 not found: ID does not exist" Feb 02 10:41:42 crc kubenswrapper[4845]: I0202 10:41:42.623389 4845 generic.go:334] "Generic (PLEG): container finished" podID="d82eed2b-a080-46e8-86d6-9fd5fc6ee721" containerID="6a8aaa7e5b8588aa254156f4f7413bbe2803ade42897c279b1aeadb4a06b61ae" exitCode=0 Feb 02 10:41:42 crc kubenswrapper[4845]: I0202 10:41:42.624523 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerDied","Data":"6a8aaa7e5b8588aa254156f4f7413bbe2803ade42897c279b1aeadb4a06b61ae"} Feb 02 10:41:42 crc kubenswrapper[4845]: I0202 10:41:42.624630 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"9d078a3ac63aeece2981887726cdf89367123881d3727a55f36dbc54153449ac"} Feb 02 10:41:43 crc kubenswrapper[4845]: I0202 10:41:43.635647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"ef26a1e78c7d193b66cf7baf398381e3eb1b7be1c29fa1744759e6ab00ca8cc3"} Feb 02 
10:41:43 crc kubenswrapper[4845]: I0202 10:41:43.635986 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"71a141c05c90158c519861b67deaafa2a2ebb438350eb7ee3b765ef91a7a89a2"} Feb 02 10:41:43 crc kubenswrapper[4845]: I0202 10:41:43.635997 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"50b65571723e5a30dd1e0df31816ba4954e2ae7c6bc7d5ed56b63d90cb6dbd92"} Feb 02 10:41:43 crc kubenswrapper[4845]: I0202 10:41:43.721032 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b93b041-3f3f-47ba-a9d4-d09de1b326dc" path="/var/lib/kubelet/pods/7b93b041-3f3f-47ba-a9d4-d09de1b326dc/volumes" Feb 02 10:41:44 crc kubenswrapper[4845]: I0202 10:41:44.643757 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"a1ee2b8828c8638e143260a38393f97f9f953373ad44d6f6e65ed1217a84a65d"} Feb 02 10:41:44 crc kubenswrapper[4845]: I0202 10:41:44.643796 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"80288c7c9b9ca70cf6f5dbe5776865cec991199f5502d433db89c9713ed05e4e"} Feb 02 10:41:45 crc kubenswrapper[4845]: I0202 10:41:45.653251 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"85f1a736b84e9f8d81cf1e8b5aeb90b3c0aaa644b5bd01656d95a11622037880"} Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.237991 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.238405 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.238584 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.239596 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.239944 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582" gracePeriod=600 Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.672791 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582" exitCode=0 Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.672874 4845 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582"} Feb 02 10:41:46 crc kubenswrapper[4845]: I0202 10:41:46.673039 4845 scope.go:117] "RemoveContainer" containerID="df9230c12c17f28801d9b1be21f07e2881dfba8fde329097a5e90d09e1d981f3" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.192747 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27"] Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.194135 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.197637 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.197979 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.198014 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-6vc4w" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.209597 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvdx\" (UniqueName: \"kubernetes.io/projected/308cfce2-8d47-45e6-9153-a8cd92a8758b-kube-api-access-8jvdx\") pod \"obo-prometheus-operator-68bc856cb9-rmj27\" (UID: \"308cfce2-8d47-45e6-9153-a8cd92a8758b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.311110 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvdx\" (UniqueName: 
\"kubernetes.io/projected/308cfce2-8d47-45e6-9153-a8cd92a8758b-kube-api-access-8jvdx\") pod \"obo-prometheus-operator-68bc856cb9-rmj27\" (UID: \"308cfce2-8d47-45e6-9153-a8cd92a8758b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.326766 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466"] Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.327481 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.335786 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qprcl" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.335829 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.335933 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch"] Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.336869 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.354748 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvdx\" (UniqueName: \"kubernetes.io/projected/308cfce2-8d47-45e6-9153-a8cd92a8758b-kube-api-access-8jvdx\") pod \"obo-prometheus-operator-68bc856cb9-rmj27\" (UID: \"308cfce2-8d47-45e6-9153-a8cd92a8758b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.467321 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.467680 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: \"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.467800 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: 
I0202 10:41:47.467910 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: \"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.511107 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.526053 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5wvdz"] Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.526825 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.529276 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.531960 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wdzpz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.569103 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.569581 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: \"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.569618 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.569644 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: \"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.583057 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.583072 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: 
\"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.583298 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289096b-a35d-4a41-90a3-cab735629cc7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch\" (UID: \"d289096b-a35d-4a41-90a3-cab735629cc7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.589893 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(9b5e30249634e55d12f4d94f18c5c9c46180df0bc10fd9fad30ffec1f337db76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.589990 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(9b5e30249634e55d12f4d94f18c5c9c46180df0bc10fd9fad30ffec1f337db76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.590024 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(9b5e30249634e55d12f4d94f18c5c9c46180df0bc10fd9fad30ffec1f337db76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.590075 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(9b5e30249634e55d12f4d94f18c5c9c46180df0bc10fd9fad30ffec1f337db76): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" podUID="308cfce2-8d47-45e6-9153-a8cd92a8758b" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.591264 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466\" (UID: \"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.642163 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.669905 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(95c7df11e34e9a6e0e8fc61e4d80c6184f98bece68c2dfc0ae8f4ae1a2580f62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.669982 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(95c7df11e34e9a6e0e8fc61e4d80c6184f98bece68c2dfc0ae8f4ae1a2580f62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.670002 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(95c7df11e34e9a6e0e8fc61e4d80c6184f98bece68c2dfc0ae8f4ae1a2580f62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.670056 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators(631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators(631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(95c7df11e34e9a6e0e8fc61e4d80c6184f98bece68c2dfc0ae8f4ae1a2580f62): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" podUID="631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.670707 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqlwr\" (UniqueName: \"kubernetes.io/projected/b75686c5-933f-4f8d-bf87-0229795baf12-kube-api-access-lqlwr\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.670851 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b75686c5-933f-4f8d-bf87-0229795baf12-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.681989 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8"} Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.683549 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.689526 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"7507381aef96d67058b511c06ee19db89edff3956425194ae767cae7ad73b260"} Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.714039 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(eaec681cfc3d1f2d2fc8e93809fef2e1bdb458129f9bc7b5f52388a83a08b06f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.714105 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(eaec681cfc3d1f2d2fc8e93809fef2e1bdb458129f9bc7b5f52388a83a08b06f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.714128 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(eaec681cfc3d1f2d2fc8e93809fef2e1bdb458129f9bc7b5f52388a83a08b06f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.714167 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(eaec681cfc3d1f2d2fc8e93809fef2e1bdb458129f9bc7b5f52388a83a08b06f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" podUID="d289096b-a35d-4a41-90a3-cab735629cc7" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.746148 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8hhqb"] Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.747194 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.751406 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qdwh7" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.772357 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqlwr\" (UniqueName: \"kubernetes.io/projected/b75686c5-933f-4f8d-bf87-0229795baf12-kube-api-access-lqlwr\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.772610 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b75686c5-933f-4f8d-bf87-0229795baf12-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.776242 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b75686c5-933f-4f8d-bf87-0229795baf12-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.788034 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqlwr\" (UniqueName: \"kubernetes.io/projected/b75686c5-933f-4f8d-bf87-0229795baf12-kube-api-access-lqlwr\") pod \"observability-operator-59bdc8b94-5wvdz\" (UID: \"b75686c5-933f-4f8d-bf87-0229795baf12\") " pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" 
Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.874304 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.874582 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5dn\" (UniqueName: \"kubernetes.io/projected/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-kube-api-access-jl5dn\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.923507 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.952382 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(c7045a081315465faad1c050ecc58431fdc571ada0833d62e56d2b646cbb6e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.952492 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(c7045a081315465faad1c050ecc58431fdc571ada0833d62e56d2b646cbb6e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.952513 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(c7045a081315465faad1c050ecc58431fdc571ada0833d62e56d2b646cbb6e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:47 crc kubenswrapper[4845]: E0202 10:41:47.952565 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(c7045a081315465faad1c050ecc58431fdc571ada0833d62e56d2b646cbb6e18): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" podUID="b75686c5-933f-4f8d-bf87-0229795baf12" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.975787 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5dn\" (UniqueName: \"kubernetes.io/projected/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-kube-api-access-jl5dn\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.975874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.976746 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:47 crc kubenswrapper[4845]: I0202 10:41:47.992221 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5dn\" (UniqueName: \"kubernetes.io/projected/1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec-kube-api-access-jl5dn\") pod \"perses-operator-5bf474d74f-8hhqb\" (UID: \"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec\") " pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:48 crc kubenswrapper[4845]: I0202 10:41:48.070574 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:48 crc kubenswrapper[4845]: E0202 10:41:48.094642 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(c5f1701ce4247eb02ffb62860f88a816cd7f9eee82d508c49315ccc464e483b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:48 crc kubenswrapper[4845]: E0202 10:41:48.094759 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(c5f1701ce4247eb02ffb62860f88a816cd7f9eee82d508c49315ccc464e483b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:48 crc kubenswrapper[4845]: E0202 10:41:48.094840 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(c5f1701ce4247eb02ffb62860f88a816cd7f9eee82d508c49315ccc464e483b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:48 crc kubenswrapper[4845]: E0202 10:41:48.094956 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(c5f1701ce4247eb02ffb62860f88a816cd7f9eee82d508c49315ccc464e483b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" podUID="1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec" Feb 02 10:41:49 crc kubenswrapper[4845]: I0202 10:41:49.705450 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" event={"ID":"d82eed2b-a080-46e8-86d6-9fd5fc6ee721","Type":"ContainerStarted","Data":"128480b8c77f1769c8bc0e796797eacce6318444eb182f13681aba2cdcc29a1f"} Feb 02 10:41:49 crc kubenswrapper[4845]: I0202 10:41:49.706577 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:49 crc kubenswrapper[4845]: I0202 10:41:49.739931 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" podStartSLOduration=8.739911933 podStartE2EDuration="8.739911933s" podCreationTimestamp="2026-02-02 10:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:49.737599027 +0000 UTC m=+590.829000497" watchObservedRunningTime="2026-02-02 10:41:49.739911933 +0000 UTC m=+590.831313423" Feb 02 10:41:49 
crc kubenswrapper[4845]: I0202 10:41:49.798423 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.472238 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27"] Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.472377 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.472811 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.478329 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8hhqb"] Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.478460 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.478977 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.510644 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch"] Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.510785 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.512560 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.522862 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5wvdz"] Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.523004 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.523495 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.557214 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466"] Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.557351 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.557805 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.586088 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(68996c8508724fbd9016da0eef3c9d68b8b95ad32e83069c6c57fe1b6827f31b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.586179 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(68996c8508724fbd9016da0eef3c9d68b8b95ad32e83069c6c57fe1b6827f31b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.586209 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(68996c8508724fbd9016da0eef3c9d68b8b95ad32e83069c6c57fe1b6827f31b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.586263 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(68996c8508724fbd9016da0eef3c9d68b8b95ad32e83069c6c57fe1b6827f31b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" podUID="308cfce2-8d47-45e6-9153-a8cd92a8758b" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.594386 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(8184fb1fbd2d417cd0328ac22b647b2d4bc1a4fa90f381b6a8890b694ae33c33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.594449 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(8184fb1fbd2d417cd0328ac22b647b2d4bc1a4fa90f381b6a8890b694ae33c33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.594468 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(8184fb1fbd2d417cd0328ac22b647b2d4bc1a4fa90f381b6a8890b694ae33c33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.594507 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(8184fb1fbd2d417cd0328ac22b647b2d4bc1a4fa90f381b6a8890b694ae33c33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" podUID="1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.627399 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(60064200f0e8007bebbd5fa26eb450fcc94b3711a5203d7da8fd50da9e98ce30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.627561 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(60064200f0e8007bebbd5fa26eb450fcc94b3711a5203d7da8fd50da9e98ce30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.627587 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(60064200f0e8007bebbd5fa26eb450fcc94b3711a5203d7da8fd50da9e98ce30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.627684 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(60064200f0e8007bebbd5fa26eb450fcc94b3711a5203d7da8fd50da9e98ce30): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" podUID="d289096b-a35d-4a41-90a3-cab735629cc7" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.638654 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(71fd7d6d6f430db09f5704c4547c6a1c7123cafbcf322cdbb9a3a82557e58ef2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.638733 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(71fd7d6d6f430db09f5704c4547c6a1c7123cafbcf322cdbb9a3a82557e58ef2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.638756 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(71fd7d6d6f430db09f5704c4547c6a1c7123cafbcf322cdbb9a3a82557e58ef2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.638803 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(71fd7d6d6f430db09f5704c4547c6a1c7123cafbcf322cdbb9a3a82557e58ef2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" podUID="b75686c5-933f-4f8d-bf87-0229795baf12" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.646000 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(e67c8daca8aee1c3b4da005245135281896d4a42360c3395f237374992bf3702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.646066 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(e67c8daca8aee1c3b4da005245135281896d4a42360c3395f237374992bf3702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.646089 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(e67c8daca8aee1c3b4da005245135281896d4a42360c3395f237374992bf3702): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:41:50 crc kubenswrapper[4845]: E0202 10:41:50.646129 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators(631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators(631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_openshift-operators_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413_0(e67c8daca8aee1c3b4da005245135281896d4a42360c3395f237374992bf3702): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" podUID="631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.710301 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.710399 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:50 crc kubenswrapper[4845]: I0202 10:41:50.756451 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:41:51 crc kubenswrapper[4845]: I0202 10:41:51.712536 4845 scope.go:117] "RemoveContainer" containerID="276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b" Feb 02 10:41:51 crc kubenswrapper[4845]: E0202 10:41:51.713388 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kzwst_openshift-multus(310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3)\"" pod="openshift-multus/multus-kzwst" podUID="310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3" Feb 02 10:42:01 crc kubenswrapper[4845]: I0202 10:42:01.713485 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:01 crc kubenswrapper[4845]: I0202 10:42:01.715472 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:01 crc kubenswrapper[4845]: E0202 10:42:01.742948 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(33b261f34c416aaf3e4f914aede8f77e42a99e5f63df355c9532db4b9e51a7d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:42:01 crc kubenswrapper[4845]: E0202 10:42:01.743076 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(33b261f34c416aaf3e4f914aede8f77e42a99e5f63df355c9532db4b9e51a7d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:01 crc kubenswrapper[4845]: E0202 10:42:01.743109 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(33b261f34c416aaf3e4f914aede8f77e42a99e5f63df355c9532db4b9e51a7d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:01 crc kubenswrapper[4845]: E0202 10:42:01.743184 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8hhqb_openshift-operators(1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8hhqb_openshift-operators_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec_0(33b261f34c416aaf3e4f914aede8f77e42a99e5f63df355c9532db4b9e51a7d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" podUID="1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec" Feb 02 10:42:02 crc kubenswrapper[4845]: I0202 10:42:02.711642 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:02 crc kubenswrapper[4845]: I0202 10:42:02.714474 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:02 crc kubenswrapper[4845]: I0202 10:42:02.714763 4845 scope.go:117] "RemoveContainer" containerID="276b266e719606f3b154e5d01310e24320f7c03059107da4aad492d36a95867b" Feb 02 10:42:02 crc kubenswrapper[4845]: I0202 10:42:02.715815 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:02 crc kubenswrapper[4845]: I0202 10:42:02.718432 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.796000 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(b89f8f8709d56d3be6ff6b6645b0e14829f11b1502e9f4cf869cef757ace04de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.796098 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(b89f8f8709d56d3be6ff6b6645b0e14829f11b1502e9f4cf869cef757ace04de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.796131 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(b89f8f8709d56d3be6ff6b6645b0e14829f11b1502e9f4cf869cef757ace04de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.796194 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators(d289096b-a35d-4a41-90a3-cab735629cc7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_openshift-operators_d289096b-a35d-4a41-90a3-cab735629cc7_0(b89f8f8709d56d3be6ff6b6645b0e14829f11b1502e9f4cf869cef757ace04de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" podUID="d289096b-a35d-4a41-90a3-cab735629cc7" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.836028 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(285e78de503dbd5bcc7a75f4a7aecc5a78f218771235523977c6c99aa441d169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.836102 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(285e78de503dbd5bcc7a75f4a7aecc5a78f218771235523977c6c99aa441d169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.836123 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(285e78de503dbd5bcc7a75f4a7aecc5a78f218771235523977c6c99aa441d169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:02 crc kubenswrapper[4845]: E0202 10:42:02.836195 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators(308cfce2-8d47-45e6-9153-a8cd92a8758b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-rmj27_openshift-operators_308cfce2-8d47-45e6-9153-a8cd92a8758b_0(285e78de503dbd5bcc7a75f4a7aecc5a78f218771235523977c6c99aa441d169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" podUID="308cfce2-8d47-45e6-9153-a8cd92a8758b" Feb 02 10:42:03 crc kubenswrapper[4845]: I0202 10:42:03.711959 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:03 crc kubenswrapper[4845]: I0202 10:42:03.712855 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:03 crc kubenswrapper[4845]: E0202 10:42:03.741029 4845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(b0a95bce9ea9cd5d8076a5251ac1a9abd3b11448e537696e180b7232316cf05b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:42:03 crc kubenswrapper[4845]: E0202 10:42:03.741118 4845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(b0a95bce9ea9cd5d8076a5251ac1a9abd3b11448e537696e180b7232316cf05b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:03 crc kubenswrapper[4845]: E0202 10:42:03.741152 4845 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(b0a95bce9ea9cd5d8076a5251ac1a9abd3b11448e537696e180b7232316cf05b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:03 crc kubenswrapper[4845]: E0202 10:42:03.741209 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-5wvdz_openshift-operators(b75686c5-933f-4f8d-bf87-0229795baf12)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-5wvdz_openshift-operators_b75686c5-933f-4f8d-bf87-0229795baf12_0(b0a95bce9ea9cd5d8076a5251ac1a9abd3b11448e537696e180b7232316cf05b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" podUID="b75686c5-933f-4f8d-bf87-0229795baf12" Feb 02 10:42:03 crc kubenswrapper[4845]: I0202 10:42:03.793093 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzwst_310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3/kube-multus/2.log" Feb 02 10:42:03 crc kubenswrapper[4845]: I0202 10:42:03.793149 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzwst" event={"ID":"310f06ec-b9c5-40c9-aeb9-a6e4ef5304c3","Type":"ContainerStarted","Data":"2b4a07b1d9171eb411a82de7135210292ed9c2a5fc790f6fd74c2e539f900185"} Feb 02 10:42:04 crc kubenswrapper[4845]: I0202 10:42:04.712446 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:42:04 crc kubenswrapper[4845]: I0202 10:42:04.713482 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" Feb 02 10:42:05 crc kubenswrapper[4845]: I0202 10:42:05.113471 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466"] Feb 02 10:42:05 crc kubenswrapper[4845]: I0202 10:42:05.810826 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" event={"ID":"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413","Type":"ContainerStarted","Data":"08d226cd3318e0a212080dd634fff2b9f5e2c691b3e1aed077d9bf1414f241b5"} Feb 02 10:42:11 crc kubenswrapper[4845]: I0202 10:42:11.933783 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zgmmp" Feb 02 10:42:12 crc kubenswrapper[4845]: I0202 10:42:12.711737 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:12 crc kubenswrapper[4845]: I0202 10:42:12.712176 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:13 crc kubenswrapper[4845]: I0202 10:42:13.887825 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8hhqb"] Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.712134 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.712876 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.898508 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" event={"ID":"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec","Type":"ContainerStarted","Data":"3afef2be0e2f5a4237a498c299928a513a2258aa23d6c7c000241f200cab02ee"} Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.900024 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" event={"ID":"631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413","Type":"ContainerStarted","Data":"484aaa9298f4a42b09d3f3da27122bd50238e83d649c292b63dcd669d5e1c3f7"} Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.905610 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch"] Feb 02 10:42:14 crc kubenswrapper[4845]: W0202 10:42:14.907298 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd289096b_a35d_4a41_90a3_cab735629cc7.slice/crio-a4328a34ee8d46edafc84e0b5ccff02a1fb90494cbe6f4f5b5d82b1e9a40b538 WatchSource:0}: Error finding container a4328a34ee8d46edafc84e0b5ccff02a1fb90494cbe6f4f5b5d82b1e9a40b538: Status 404 returned error can't find the container with id a4328a34ee8d46edafc84e0b5ccff02a1fb90494cbe6f4f5b5d82b1e9a40b538 Feb 02 10:42:14 crc kubenswrapper[4845]: I0202 10:42:14.935195 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-kx466" podStartSLOduration=18.652121897 podStartE2EDuration="27.93517195s" podCreationTimestamp="2026-02-02 10:41:47 +0000 UTC" firstStartedPulling="2026-02-02 10:42:05.120803312 +0000 UTC m=+606.212204762" lastFinishedPulling="2026-02-02 
10:42:14.403853365 +0000 UTC m=+615.495254815" observedRunningTime="2026-02-02 10:42:14.933406699 +0000 UTC m=+616.024808179" watchObservedRunningTime="2026-02-02 10:42:14.93517195 +0000 UTC m=+616.026573410" Feb 02 10:42:15 crc kubenswrapper[4845]: I0202 10:42:15.715449 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:15 crc kubenswrapper[4845]: I0202 10:42:15.719447 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:15 crc kubenswrapper[4845]: I0202 10:42:15.907933 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" event={"ID":"d289096b-a35d-4a41-90a3-cab735629cc7","Type":"ContainerStarted","Data":"10bc03154c57c5c07d663e35caee8425747853d33b7ceea61037a8ff4ddd9c79"} Feb 02 10:42:15 crc kubenswrapper[4845]: I0202 10:42:15.907986 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" event={"ID":"d289096b-a35d-4a41-90a3-cab735629cc7","Type":"ContainerStarted","Data":"a4328a34ee8d46edafc84e0b5ccff02a1fb90494cbe6f4f5b5d82b1e9a40b538"} Feb 02 10:42:15 crc kubenswrapper[4845]: I0202 10:42:15.934539 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch" podStartSLOduration=28.934519979 podStartE2EDuration="28.934519979s" podCreationTimestamp="2026-02-02 10:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:15.928273532 +0000 UTC m=+617.019674982" watchObservedRunningTime="2026-02-02 10:42:15.934519979 +0000 UTC m=+617.025921429" Feb 02 10:42:16 crc kubenswrapper[4845]: I0202 10:42:16.216429 
4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5wvdz"] Feb 02 10:42:16 crc kubenswrapper[4845]: W0202 10:42:16.222226 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb75686c5_933f_4f8d_bf87_0229795baf12.slice/crio-b21494f88ed0d601d2f5353d8f0b419da3243e78fbca3176e26c7eaa4a75afef WatchSource:0}: Error finding container b21494f88ed0d601d2f5353d8f0b419da3243e78fbca3176e26c7eaa4a75afef: Status 404 returned error can't find the container with id b21494f88ed0d601d2f5353d8f0b419da3243e78fbca3176e26c7eaa4a75afef Feb 02 10:42:16 crc kubenswrapper[4845]: I0202 10:42:16.915726 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" event={"ID":"b75686c5-933f-4f8d-bf87-0229795baf12","Type":"ContainerStarted","Data":"b21494f88ed0d601d2f5353d8f0b419da3243e78fbca3176e26c7eaa4a75afef"} Feb 02 10:42:17 crc kubenswrapper[4845]: I0202 10:42:17.712160 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:17 crc kubenswrapper[4845]: I0202 10:42:17.712896 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" Feb 02 10:42:17 crc kubenswrapper[4845]: I0202 10:42:17.922928 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" event={"ID":"1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec","Type":"ContainerStarted","Data":"634e46c3ef079bbd29e9dec1eb2c2e9c89dd65ebe6e39f1e7b62b411e2e279cd"} Feb 02 10:42:17 crc kubenswrapper[4845]: I0202 10:42:17.923097 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:17 crc kubenswrapper[4845]: I0202 10:42:17.940111 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" podStartSLOduration=27.519446516 podStartE2EDuration="30.940090915s" podCreationTimestamp="2026-02-02 10:41:47 +0000 UTC" firstStartedPulling="2026-02-02 10:42:13.896516232 +0000 UTC m=+614.987917682" lastFinishedPulling="2026-02-02 10:42:17.317160631 +0000 UTC m=+618.408562081" observedRunningTime="2026-02-02 10:42:17.937381988 +0000 UTC m=+619.028783468" watchObservedRunningTime="2026-02-02 10:42:17.940090915 +0000 UTC m=+619.031492365" Feb 02 10:42:18 crc kubenswrapper[4845]: I0202 10:42:18.139828 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27"] Feb 02 10:42:18 crc kubenswrapper[4845]: W0202 10:42:18.145380 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod308cfce2_8d47_45e6_9153_a8cd92a8758b.slice/crio-edecbf343eb9762c250cc5d010b3077c47840c0e78d17d4f5bc99657ab64f394 WatchSource:0}: Error finding container edecbf343eb9762c250cc5d010b3077c47840c0e78d17d4f5bc99657ab64f394: Status 404 returned error can't find the container with id edecbf343eb9762c250cc5d010b3077c47840c0e78d17d4f5bc99657ab64f394 Feb 02 10:42:18 crc 
kubenswrapper[4845]: I0202 10:42:18.929121 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" event={"ID":"308cfce2-8d47-45e6-9153-a8cd92a8758b","Type":"ContainerStarted","Data":"edecbf343eb9762c250cc5d010b3077c47840c0e78d17d4f5bc99657ab64f394"} Feb 02 10:42:24 crc kubenswrapper[4845]: I0202 10:42:24.973662 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" event={"ID":"308cfce2-8d47-45e6-9153-a8cd92a8758b","Type":"ContainerStarted","Data":"05a63d0d9e826be721515bd8c6bac42820418d079e03689b9b9fd6013ab69b6d"} Feb 02 10:42:24 crc kubenswrapper[4845]: I0202 10:42:24.977449 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" event={"ID":"b75686c5-933f-4f8d-bf87-0229795baf12","Type":"ContainerStarted","Data":"56aa998f079128273f0715901559e831d59a0a7f276b1a2e485b4905bf5366b6"} Feb 02 10:42:24 crc kubenswrapper[4845]: I0202 10:42:24.977682 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:25 crc kubenswrapper[4845]: I0202 10:42:25.000064 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rmj27" podStartSLOduration=31.93609995 podStartE2EDuration="38.00003884s" podCreationTimestamp="2026-02-02 10:41:47 +0000 UTC" firstStartedPulling="2026-02-02 10:42:18.147583282 +0000 UTC m=+619.238984732" lastFinishedPulling="2026-02-02 10:42:24.211522162 +0000 UTC m=+625.302923622" observedRunningTime="2026-02-02 10:42:24.993065622 +0000 UTC m=+626.084467072" watchObservedRunningTime="2026-02-02 10:42:25.00003884 +0000 UTC m=+626.091440300" Feb 02 10:42:25 crc kubenswrapper[4845]: I0202 10:42:25.019642 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" podStartSLOduration=30.028415541 podStartE2EDuration="38.019624218s" podCreationTimestamp="2026-02-02 10:41:47 +0000 UTC" firstStartedPulling="2026-02-02 10:42:16.22399044 +0000 UTC m=+617.315391890" lastFinishedPulling="2026-02-02 10:42:24.215199117 +0000 UTC m=+625.306600567" observedRunningTime="2026-02-02 10:42:25.016099688 +0000 UTC m=+626.107501138" watchObservedRunningTime="2026-02-02 10:42:25.019624218 +0000 UTC m=+626.111025658" Feb 02 10:42:25 crc kubenswrapper[4845]: I0202 10:42:25.037175 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5wvdz" Feb 02 10:42:28 crc kubenswrapper[4845]: I0202 10:42:28.085611 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8hhqb" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.866674 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7p596"] Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.867798 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7p596" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.870223 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.870548 4845 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qghql" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.871168 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.882580 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ltwq9"] Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.883572 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.886138 4845 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wt465" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.897432 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8"] Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.898482 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.902352 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7p596"] Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.903681 4845 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-k7zt6" Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.942287 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8"] Feb 02 10:42:32 crc kubenswrapper[4845]: I0202 10:42:32.953785 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ltwq9"] Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.001559 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65q6\" (UniqueName: \"kubernetes.io/projected/c1996e72-3bd0-4770-9662-c0c1359d7a8b-kube-api-access-z65q6\") pod \"cert-manager-858654f9db-7p596\" (UID: \"c1996e72-3bd0-4770-9662-c0c1359d7a8b\") " pod="cert-manager/cert-manager-858654f9db-7p596" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.001631 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7tf\" (UniqueName: \"kubernetes.io/projected/7b6c985e-704e-4ff8-b668-d2f4cb218172-kube-api-access-kt7tf\") pod \"cert-manager-webhook-687f57d79b-ltwq9\" (UID: \"7b6c985e-704e-4ff8-b668-d2f4cb218172\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.001690 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zjj\" (UniqueName: \"kubernetes.io/projected/8b99109d-f1ff-4d24-b08a-c317fffd456c-kube-api-access-55zjj\") pod 
\"cert-manager-cainjector-cf98fcc89-vqsd8\" (UID: \"8b99109d-f1ff-4d24-b08a-c317fffd456c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.103243 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7tf\" (UniqueName: \"kubernetes.io/projected/7b6c985e-704e-4ff8-b668-d2f4cb218172-kube-api-access-kt7tf\") pod \"cert-manager-webhook-687f57d79b-ltwq9\" (UID: \"7b6c985e-704e-4ff8-b668-d2f4cb218172\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.103327 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zjj\" (UniqueName: \"kubernetes.io/projected/8b99109d-f1ff-4d24-b08a-c317fffd456c-kube-api-access-55zjj\") pod \"cert-manager-cainjector-cf98fcc89-vqsd8\" (UID: \"8b99109d-f1ff-4d24-b08a-c317fffd456c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.103361 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65q6\" (UniqueName: \"kubernetes.io/projected/c1996e72-3bd0-4770-9662-c0c1359d7a8b-kube-api-access-z65q6\") pod \"cert-manager-858654f9db-7p596\" (UID: \"c1996e72-3bd0-4770-9662-c0c1359d7a8b\") " pod="cert-manager/cert-manager-858654f9db-7p596" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.130796 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7tf\" (UniqueName: \"kubernetes.io/projected/7b6c985e-704e-4ff8-b668-d2f4cb218172-kube-api-access-kt7tf\") pod \"cert-manager-webhook-687f57d79b-ltwq9\" (UID: \"7b6c985e-704e-4ff8-b668-d2f4cb218172\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.131477 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zjj\" 
(UniqueName: \"kubernetes.io/projected/8b99109d-f1ff-4d24-b08a-c317fffd456c-kube-api-access-55zjj\") pod \"cert-manager-cainjector-cf98fcc89-vqsd8\" (UID: \"8b99109d-f1ff-4d24-b08a-c317fffd456c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.134542 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65q6\" (UniqueName: \"kubernetes.io/projected/c1996e72-3bd0-4770-9662-c0c1359d7a8b-kube-api-access-z65q6\") pod \"cert-manager-858654f9db-7p596\" (UID: \"c1996e72-3bd0-4770-9662-c0c1359d7a8b\") " pod="cert-manager/cert-manager-858654f9db-7p596" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.190929 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7p596" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.220176 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.245819 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.735595 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ltwq9"] Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.824395 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8"] Feb 02 10:42:33 crc kubenswrapper[4845]: W0202 10:42:33.824839 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1996e72_3bd0_4770_9662_c0c1359d7a8b.slice/crio-18c1cc05b7f4a2760b38006e10a59aab789c2cea129f9375b62fcdd3fdb7c587 WatchSource:0}: Error finding container 18c1cc05b7f4a2760b38006e10a59aab789c2cea129f9375b62fcdd3fdb7c587: Status 404 returned error can't find the container with id 18c1cc05b7f4a2760b38006e10a59aab789c2cea129f9375b62fcdd3fdb7c587 Feb 02 10:42:33 crc kubenswrapper[4845]: W0202 10:42:33.828480 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b99109d_f1ff_4d24_b08a_c317fffd456c.slice/crio-aae72d15b12e31cfe6bb5cd99710a1685183bf60aaeffbf924ff4755b050d2bf WatchSource:0}: Error finding container aae72d15b12e31cfe6bb5cd99710a1685183bf60aaeffbf924ff4755b050d2bf: Status 404 returned error can't find the container with id aae72d15b12e31cfe6bb5cd99710a1685183bf60aaeffbf924ff4755b050d2bf Feb 02 10:42:33 crc kubenswrapper[4845]: I0202 10:42:33.835999 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7p596"] Feb 02 10:42:34 crc kubenswrapper[4845]: I0202 10:42:34.039819 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" 
event={"ID":"8b99109d-f1ff-4d24-b08a-c317fffd456c","Type":"ContainerStarted","Data":"aae72d15b12e31cfe6bb5cd99710a1685183bf60aaeffbf924ff4755b050d2bf"} Feb 02 10:42:34 crc kubenswrapper[4845]: I0202 10:42:34.041109 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" event={"ID":"7b6c985e-704e-4ff8-b668-d2f4cb218172","Type":"ContainerStarted","Data":"d907f45c657f2727d993361770ae30682ba0ee0cff6f4f92419fcd1327262b86"} Feb 02 10:42:34 crc kubenswrapper[4845]: I0202 10:42:34.042209 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7p596" event={"ID":"c1996e72-3bd0-4770-9662-c0c1359d7a8b","Type":"ContainerStarted","Data":"18c1cc05b7f4a2760b38006e10a59aab789c2cea129f9375b62fcdd3fdb7c587"} Feb 02 10:42:44 crc kubenswrapper[4845]: I0202 10:42:44.116914 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" event={"ID":"7b6c985e-704e-4ff8-b668-d2f4cb218172","Type":"ContainerStarted","Data":"1b2805dce98489f2e025deaffc94d28553151f450e828ce2cc5c7b43582f7319"} Feb 02 10:42:44 crc kubenswrapper[4845]: I0202 10:42:44.117420 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:44 crc kubenswrapper[4845]: I0202 10:42:44.118262 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7p596" event={"ID":"c1996e72-3bd0-4770-9662-c0c1359d7a8b","Type":"ContainerStarted","Data":"81cdc807afca4e336148ed9ac4f8d23235229c93406e33df603e4ccd65ae95b0"} Feb 02 10:42:44 crc kubenswrapper[4845]: I0202 10:42:44.119449 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" event={"ID":"8b99109d-f1ff-4d24-b08a-c317fffd456c","Type":"ContainerStarted","Data":"389ec7a45d6ff07d80dc73e5713795b2d4188dd6b953053b78c77e0fc161b22f"} Feb 02 10:42:44 crc 
kubenswrapper[4845]: I0202 10:42:44.152736 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" podStartSLOduration=2.035672149 podStartE2EDuration="12.152692014s" podCreationTimestamp="2026-02-02 10:42:32 +0000 UTC" firstStartedPulling="2026-02-02 10:42:33.752620141 +0000 UTC m=+634.844021591" lastFinishedPulling="2026-02-02 10:42:43.869640006 +0000 UTC m=+644.961041456" observedRunningTime="2026-02-02 10:42:44.143266586 +0000 UTC m=+645.234668046" watchObservedRunningTime="2026-02-02 10:42:44.152692014 +0000 UTC m=+645.244093464" Feb 02 10:42:44 crc kubenswrapper[4845]: I0202 10:42:44.175668 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7p596" podStartSLOduration=2.193571675 podStartE2EDuration="12.175650058s" podCreationTimestamp="2026-02-02 10:42:32 +0000 UTC" firstStartedPulling="2026-02-02 10:42:33.827000779 +0000 UTC m=+634.918402229" lastFinishedPulling="2026-02-02 10:42:43.809079162 +0000 UTC m=+644.900480612" observedRunningTime="2026-02-02 10:42:44.167792334 +0000 UTC m=+645.259193784" watchObservedRunningTime="2026-02-02 10:42:44.175650058 +0000 UTC m=+645.267051508" Feb 02 10:42:53 crc kubenswrapper[4845]: I0202 10:42:53.224484 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-ltwq9" Feb 02 10:42:53 crc kubenswrapper[4845]: I0202 10:42:53.247244 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vqsd8" podStartSLOduration=11.270911819 podStartE2EDuration="21.247214548s" podCreationTimestamp="2026-02-02 10:42:32 +0000 UTC" firstStartedPulling="2026-02-02 10:42:33.832775143 +0000 UTC m=+634.924176593" lastFinishedPulling="2026-02-02 10:42:43.809077882 +0000 UTC m=+644.900479322" observedRunningTime="2026-02-02 10:42:44.195943035 +0000 UTC m=+645.287344495" 
watchObservedRunningTime="2026-02-02 10:42:53.247214548 +0000 UTC m=+654.338616038" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.605207 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt"] Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.607619 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.609701 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.655970 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt"] Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.760615 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.760687 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.760802 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnhjd\" (UniqueName: \"kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.862462 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.862576 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.862617 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnhjd\" (UniqueName: \"kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.863557 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.863574 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.885308 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnhjd\" (UniqueName: \"kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:14 crc kubenswrapper[4845]: I0202 10:43:14.931178 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.050333 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz"] Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.051776 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.067120 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz"] Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.171604 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.171710 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9n85\" (UniqueName: \"kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.171743 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.272946 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.273052 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9n85\" (UniqueName: \"kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.273078 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.273758 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.274165 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: 
\"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.289649 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9n85\" (UniqueName: \"kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.369415 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.448514 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt"] Feb 02 10:43:15 crc kubenswrapper[4845]: W0202 10:43:15.460821 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ead5170_aa2d_4a22_a528_02edf1375239.slice/crio-56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2 WatchSource:0}: Error finding container 56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2: Status 404 returned error can't find the container with id 56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2 Feb 02 10:43:15 crc kubenswrapper[4845]: I0202 10:43:15.768594 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz"] Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.318873 4845 generic.go:334] "Generic (PLEG): container finished" podID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" 
containerID="08e15f1408179ded73501a7180ee37bd0122da7d1e775c76239ad3d338df7392" exitCode=0 Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.319052 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" event={"ID":"2e68fd72-b961-4a58-9f54-01bd2f6ebd76","Type":"ContainerDied","Data":"08e15f1408179ded73501a7180ee37bd0122da7d1e775c76239ad3d338df7392"} Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.319129 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" event={"ID":"2e68fd72-b961-4a58-9f54-01bd2f6ebd76","Type":"ContainerStarted","Data":"c328959b5704423d66f46d82cd926b32f32ec5337fd4df119b8682189b823366"} Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.327391 4845 generic.go:334] "Generic (PLEG): container finished" podID="8ead5170-aa2d-4a22-a528-02edf1375239" containerID="cf8d0bbf8e7ea2bdf2bafb3c41509dc4a4527f3b417b1f3f58101835dbfaab0f" exitCode=0 Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.327440 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" event={"ID":"8ead5170-aa2d-4a22-a528-02edf1375239","Type":"ContainerDied","Data":"cf8d0bbf8e7ea2bdf2bafb3c41509dc4a4527f3b417b1f3f58101835dbfaab0f"} Feb 02 10:43:16 crc kubenswrapper[4845]: I0202 10:43:16.327471 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" event={"ID":"8ead5170-aa2d-4a22-a528-02edf1375239","Type":"ContainerStarted","Data":"56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2"} Feb 02 10:43:18 crc kubenswrapper[4845]: I0202 10:43:18.340322 4845 generic.go:334] "Generic (PLEG): container finished" podID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" 
containerID="8c67dbf7cfb27dcf9fd8534c99368255fab933e7e5f365f89afec67173dcc887" exitCode=0 Feb 02 10:43:18 crc kubenswrapper[4845]: I0202 10:43:18.340427 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" event={"ID":"2e68fd72-b961-4a58-9f54-01bd2f6ebd76","Type":"ContainerDied","Data":"8c67dbf7cfb27dcf9fd8534c99368255fab933e7e5f365f89afec67173dcc887"} Feb 02 10:43:18 crc kubenswrapper[4845]: I0202 10:43:18.343429 4845 generic.go:334] "Generic (PLEG): container finished" podID="8ead5170-aa2d-4a22-a528-02edf1375239" containerID="4f2619db0a67dd6fd30bb0e621c70150792e47b4de68ff5a9637eff88e5594ef" exitCode=0 Feb 02 10:43:18 crc kubenswrapper[4845]: I0202 10:43:18.343468 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" event={"ID":"8ead5170-aa2d-4a22-a528-02edf1375239","Type":"ContainerDied","Data":"4f2619db0a67dd6fd30bb0e621c70150792e47b4de68ff5a9637eff88e5594ef"} Feb 02 10:43:19 crc kubenswrapper[4845]: I0202 10:43:19.356777 4845 generic.go:334] "Generic (PLEG): container finished" podID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerID="17ba9db62c398a40cf3c0fe7cf89e3c388ae00f4c5d6cd57a195a580a2af57ce" exitCode=0 Feb 02 10:43:19 crc kubenswrapper[4845]: I0202 10:43:19.356840 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" event={"ID":"2e68fd72-b961-4a58-9f54-01bd2f6ebd76","Type":"ContainerDied","Data":"17ba9db62c398a40cf3c0fe7cf89e3c388ae00f4c5d6cd57a195a580a2af57ce"} Feb 02 10:43:19 crc kubenswrapper[4845]: I0202 10:43:19.359254 4845 generic.go:334] "Generic (PLEG): container finished" podID="8ead5170-aa2d-4a22-a528-02edf1375239" containerID="0023190a8d25d3f86dc87050e5358fb264a399bdddaa2a897010bf724797d898" exitCode=0 Feb 02 10:43:19 crc kubenswrapper[4845]: I0202 10:43:19.359327 
4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" event={"ID":"8ead5170-aa2d-4a22-a528-02edf1375239","Type":"ContainerDied","Data":"0023190a8d25d3f86dc87050e5358fb264a399bdddaa2a897010bf724797d898"} Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.660389 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.667993 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.771939 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle\") pod \"8ead5170-aa2d-4a22-a528-02edf1375239\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.771991 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9n85\" (UniqueName: \"kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85\") pod \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.772024 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle\") pod \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.772074 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util\") pod \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\" (UID: \"2e68fd72-b961-4a58-9f54-01bd2f6ebd76\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.772119 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnhjd\" (UniqueName: \"kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd\") pod \"8ead5170-aa2d-4a22-a528-02edf1375239\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.772156 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util\") pod \"8ead5170-aa2d-4a22-a528-02edf1375239\" (UID: \"8ead5170-aa2d-4a22-a528-02edf1375239\") " Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.773172 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle" (OuterVolumeSpecName: "bundle") pod "8ead5170-aa2d-4a22-a528-02edf1375239" (UID: "8ead5170-aa2d-4a22-a528-02edf1375239"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.773170 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle" (OuterVolumeSpecName: "bundle") pod "2e68fd72-b961-4a58-9f54-01bd2f6ebd76" (UID: "2e68fd72-b961-4a58-9f54-01bd2f6ebd76"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.778125 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd" (OuterVolumeSpecName: "kube-api-access-fnhjd") pod "8ead5170-aa2d-4a22-a528-02edf1375239" (UID: "8ead5170-aa2d-4a22-a528-02edf1375239"). InnerVolumeSpecName "kube-api-access-fnhjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.786169 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util" (OuterVolumeSpecName: "util") pod "2e68fd72-b961-4a58-9f54-01bd2f6ebd76" (UID: "2e68fd72-b961-4a58-9f54-01bd2f6ebd76"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.786190 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85" (OuterVolumeSpecName: "kube-api-access-h9n85") pod "2e68fd72-b961-4a58-9f54-01bd2f6ebd76" (UID: "2e68fd72-b961-4a58-9f54-01bd2f6ebd76"). InnerVolumeSpecName "kube-api-access-h9n85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.787720 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util" (OuterVolumeSpecName: "util") pod "8ead5170-aa2d-4a22-a528-02edf1375239" (UID: "8ead5170-aa2d-4a22-a528-02edf1375239"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.874755 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9n85\" (UniqueName: \"kubernetes.io/projected/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-kube-api-access-h9n85\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.874827 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.874859 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e68fd72-b961-4a58-9f54-01bd2f6ebd76-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.874973 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnhjd\" (UniqueName: \"kubernetes.io/projected/8ead5170-aa2d-4a22-a528-02edf1375239-kube-api-access-fnhjd\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.874997 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:20 crc kubenswrapper[4845]: I0202 10:43:20.875018 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ead5170-aa2d-4a22-a528-02edf1375239-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:21 crc kubenswrapper[4845]: I0202 10:43:21.378234 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" event={"ID":"2e68fd72-b961-4a58-9f54-01bd2f6ebd76","Type":"ContainerDied","Data":"c328959b5704423d66f46d82cd926b32f32ec5337fd4df119b8682189b823366"} Feb 02 10:43:21 crc 
kubenswrapper[4845]: I0202 10:43:21.378266 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz" Feb 02 10:43:21 crc kubenswrapper[4845]: I0202 10:43:21.378284 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c328959b5704423d66f46d82cd926b32f32ec5337fd4df119b8682189b823366" Feb 02 10:43:21 crc kubenswrapper[4845]: I0202 10:43:21.380406 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" event={"ID":"8ead5170-aa2d-4a22-a528-02edf1375239","Type":"ContainerDied","Data":"56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2"} Feb 02 10:43:21 crc kubenswrapper[4845]: I0202 10:43:21.380446 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56692ba5dca356c3a1fa0771540cf593326a8fa372bed74e1f1a7c3ae3139db2" Feb 02 10:43:21 crc kubenswrapper[4845]: I0202 10:43:21.380488 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.459839 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b"] Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460632 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="util" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460644 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="util" Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460654 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460660 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460669 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460674 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460686 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="pull" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460713 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="pull" Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460725 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" 
containerName="pull" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460731 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="pull" Feb 02 10:43:31 crc kubenswrapper[4845]: E0202 10:43:31.460744 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="util" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460749 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="util" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460849 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ead5170-aa2d-4a22-a528-02edf1375239" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.460865 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e68fd72-b961-4a58-9f54-01bd2f6ebd76" containerName="extract" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.461600 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.466535 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-5kb9h" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.467326 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.467546 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.467547 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.467948 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.469008 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.487097 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b"] Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.523405 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-manager-config\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.523489 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-apiservice-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.523517 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.523535 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-webhook-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.523553 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxnt\" (UniqueName: \"kubernetes.io/projected/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-kube-api-access-bbxnt\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.624462 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: 
\"kubernetes.io/configmap/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-manager-config\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.624553 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-apiservice-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.624584 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.624607 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-webhook-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.624634 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbxnt\" (UniqueName: \"kubernetes.io/projected/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-kube-api-access-bbxnt\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.625922 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-manager-config\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.630545 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.633653 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-apiservice-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.634599 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-webhook-cert\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.652529 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bbxnt\" (UniqueName: \"kubernetes.io/projected/2988a5fa-2703-4a60-bcd6-dc81ceea7e1a-kube-api-access-bbxnt\") pod \"loki-operator-controller-manager-b659b8cd7-mwl8b\" (UID: \"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:31 crc kubenswrapper[4845]: I0202 10:43:31.778400 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:32 crc kubenswrapper[4845]: I0202 10:43:32.227542 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b"] Feb 02 10:43:32 crc kubenswrapper[4845]: W0202 10:43:32.230090 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2988a5fa_2703_4a60_bcd6_dc81ceea7e1a.slice/crio-5bfd8190657502ce3817fad07a8e68172ebc266a00f608242c592ff4dd86777b WatchSource:0}: Error finding container 5bfd8190657502ce3817fad07a8e68172ebc266a00f608242c592ff4dd86777b: Status 404 returned error can't find the container with id 5bfd8190657502ce3817fad07a8e68172ebc266a00f608242c592ff4dd86777b Feb 02 10:43:32 crc kubenswrapper[4845]: I0202 10:43:32.444595 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" event={"ID":"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a","Type":"ContainerStarted","Data":"5bfd8190657502ce3817fad07a8e68172ebc266a00f608242c592ff4dd86777b"} Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.048990 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6"] Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.050871 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.053455 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.053814 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-2jhc9" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.054401 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.071356 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6"] Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.198038 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj5dd\" (UniqueName: \"kubernetes.io/projected/cb944758-f09b-4486-9f3b-4ef87b53246b-kube-api-access-jj5dd\") pod \"cluster-logging-operator-79cf69ddc8-4pzr6\" (UID: \"cb944758-f09b-4486-9f3b-4ef87b53246b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.299986 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj5dd\" (UniqueName: \"kubernetes.io/projected/cb944758-f09b-4486-9f3b-4ef87b53246b-kube-api-access-jj5dd\") pod \"cluster-logging-operator-79cf69ddc8-4pzr6\" (UID: \"cb944758-f09b-4486-9f3b-4ef87b53246b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.330143 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj5dd\" (UniqueName: \"kubernetes.io/projected/cb944758-f09b-4486-9f3b-4ef87b53246b-kube-api-access-jj5dd\") pod 
\"cluster-logging-operator-79cf69ddc8-4pzr6\" (UID: \"cb944758-f09b-4486-9f3b-4ef87b53246b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" Feb 02 10:43:36 crc kubenswrapper[4845]: I0202 10:43:36.375975 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" Feb 02 10:43:37 crc kubenswrapper[4845]: I0202 10:43:37.373911 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6"] Feb 02 10:43:38 crc kubenswrapper[4845]: I0202 10:43:38.483337 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" event={"ID":"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a","Type":"ContainerStarted","Data":"ed3d3cae21e8dac792a54567d397630f3cd37f4717a98acd18cf5208104346c8"} Feb 02 10:43:38 crc kubenswrapper[4845]: I0202 10:43:38.484557 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" event={"ID":"cb944758-f09b-4486-9f3b-4ef87b53246b","Type":"ContainerStarted","Data":"4aea2474f36641544824e3dac558a997622c039c094b9272ee1c89b62342d8e9"} Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.237756 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.238349 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 
10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.549597 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" event={"ID":"2988a5fa-2703-4a60-bcd6-dc81ceea7e1a","Type":"ContainerStarted","Data":"55f26179f3e73506c8839663cd83b18c32b1d5b7dbc19547af5f55398856c41b"} Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.549957 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.551830 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.552741 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" event={"ID":"cb944758-f09b-4486-9f3b-4ef87b53246b","Type":"ContainerStarted","Data":"1eec1502739bdc2565db67833ef15c1010efda6659286317c92d7e6c512ff7fd"} Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.581093 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-b659b8cd7-mwl8b" podStartSLOduration=1.648783991 podStartE2EDuration="15.58107582s" podCreationTimestamp="2026-02-02 10:43:31 +0000 UTC" firstStartedPulling="2026-02-02 10:43:32.232984125 +0000 UTC m=+693.324385575" lastFinishedPulling="2026-02-02 10:43:46.165275954 +0000 UTC m=+707.256677404" observedRunningTime="2026-02-02 10:43:46.572660056 +0000 UTC m=+707.664061506" watchObservedRunningTime="2026-02-02 10:43:46.58107582 +0000 UTC m=+707.672477260" Feb 02 10:43:46 crc kubenswrapper[4845]: I0202 10:43:46.619421 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-4pzr6" 
podStartSLOduration=2.104720488 podStartE2EDuration="10.619405161s" podCreationTimestamp="2026-02-02 10:43:36 +0000 UTC" firstStartedPulling="2026-02-02 10:43:37.633344271 +0000 UTC m=+698.724745721" lastFinishedPulling="2026-02-02 10:43:46.148028944 +0000 UTC m=+707.239430394" observedRunningTime="2026-02-02 10:43:46.615446416 +0000 UTC m=+707.706847866" watchObservedRunningTime="2026-02-02 10:43:46.619405161 +0000 UTC m=+707.710806611" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.657029 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.662126 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.664813 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.664961 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.664993 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.750023 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66f043a3-08d0-477d-b973-942443398b84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66f043a3-08d0-477d-b973-942443398b84\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.750193 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6kjf\" (UniqueName: \"kubernetes.io/projected/09851333-f877-4094-8451-908fa1abc4a9-kube-api-access-m6kjf\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" 
Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.851732 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6kjf\" (UniqueName: \"kubernetes.io/projected/09851333-f877-4094-8451-908fa1abc4a9-kube-api-access-m6kjf\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.851810 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66f043a3-08d0-477d-b973-942443398b84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66f043a3-08d0-477d-b973-942443398b84\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.858976 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.859037 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66f043a3-08d0-477d-b973-942443398b84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66f043a3-08d0-477d-b973-942443398b84\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b1d0a84b00582161558387a47d62aa534a513d4618a73aac4f7aa998f727f17/globalmount\"" pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.871847 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6kjf\" (UniqueName: \"kubernetes.io/projected/09851333-f877-4094-8451-908fa1abc4a9-kube-api-access-m6kjf\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.889840 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-66f043a3-08d0-477d-b973-942443398b84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66f043a3-08d0-477d-b973-942443398b84\") pod \"minio\" (UID: \"09851333-f877-4094-8451-908fa1abc4a9\") " pod="minio-dev/minio" Feb 02 10:43:51 crc kubenswrapper[4845]: I0202 10:43:51.992488 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 02 10:43:52 crc kubenswrapper[4845]: I0202 10:43:52.434158 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 02 10:43:52 crc kubenswrapper[4845]: I0202 10:43:52.593350 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09851333-f877-4094-8451-908fa1abc4a9","Type":"ContainerStarted","Data":"279fa9017fa643cb20d1bc76003809b3a8bf0d8f551ba032000407ac89e11ce2"} Feb 02 10:43:57 crc kubenswrapper[4845]: I0202 10:43:57.625264 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"09851333-f877-4094-8451-908fa1abc4a9","Type":"ContainerStarted","Data":"870a27a0492f8d5f263f5d7470212eaf1370d2d4884793c941c7334268621208"} Feb 02 10:43:57 crc kubenswrapper[4845]: I0202 10:43:57.647764 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.499983192 podStartE2EDuration="8.647744099s" podCreationTimestamp="2026-02-02 10:43:49 +0000 UTC" firstStartedPulling="2026-02-02 10:43:52.442283368 +0000 UTC m=+713.533684818" lastFinishedPulling="2026-02-02 10:43:56.590044265 +0000 UTC m=+717.681445725" observedRunningTime="2026-02-02 10:43:57.642809357 +0000 UTC m=+718.734210797" watchObservedRunningTime="2026-02-02 10:43:57.647744099 +0000 UTC m=+718.739145559" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.332822 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-847z7"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.335616 4845 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.341560 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.341874 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-nt9sz" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.341944 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.341672 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.342217 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.352219 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-847z7"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.420652 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.420733 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnv4\" (UniqueName: \"kubernetes.io/projected/4af06166-f541-44e7-8b4b-37e4f39a8729-kube-api-access-vnnv4\") pod 
\"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.420759 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-config\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.420791 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.420827 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.440654 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-sbp94"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.441586 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.444941 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.445114 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.445265 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.467056 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-sbp94"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528184 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4zb\" (UniqueName: \"kubernetes.io/projected/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-kube-api-access-bl4zb\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528238 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnv4\" (UniqueName: \"kubernetes.io/projected/4af06166-f541-44e7-8b4b-37e4f39a8729-kube-api-access-vnnv4\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528263 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-config\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: 
\"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528309 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528333 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528358 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528378 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-config\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528420 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528447 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528514 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-s3\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.528534 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.530000 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-config\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.530924 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.535041 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.537009 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.541394 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.541757 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.542353 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.544746 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4af06166-f541-44e7-8b4b-37e4f39a8729-logging-loki-distributor-grpc\") pod 
\"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.556328 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.572934 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnv4\" (UniqueName: \"kubernetes.io/projected/4af06166-f541-44e7-8b4b-37e4f39a8729-kube-api-access-vnnv4\") pod \"logging-loki-distributor-5f678c8dd6-847z7\" (UID: \"4af06166-f541-44e7-8b4b-37e4f39a8729\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.630844 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.630964 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.631047 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-config\") pod 
\"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.631115 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-s3\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.631146 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632186 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4zb\" (UniqueName: \"kubernetes.io/projected/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-kube-api-access-bl4zb\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632243 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632287 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632323 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtks\" (UniqueName: \"kubernetes.io/projected/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-kube-api-access-qvtks\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632398 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.632449 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-config\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.634073 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-config\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " 
pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.634795 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.650315 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.677609 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.678490 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-s3\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.679266 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.679836 4845 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.685816 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.693113 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.693321 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.693439 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.694495 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.694539 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.694618 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.697182 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.701045 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl4zb\" (UniqueName: \"kubernetes.io/projected/796c275c-0c9b-4b2e-ba0f-7fbeb645028a-kube-api-access-bl4zb\") pod \"logging-loki-querier-76788598db-sbp94\" (UID: \"796c275c-0c9b-4b2e-ba0f-7fbeb645028a\") " pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.701078 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-48tlt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.712008 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.719052 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"] Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.733781 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.734086 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.734195 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-rbac\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.734987 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735098 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tenants\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735187 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735286 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt2q\" (UniqueName: \"kubernetes.io/projected/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-kube-api-access-wqt2q\") pod 
\"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735387 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735490 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735601 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5l4\" (UniqueName: \"kubernetes.io/projected/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-kube-api-access-gw5l4\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735698 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735794 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.735909 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tenants\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736024 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736129 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736247 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-config\") pod 
\"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736339 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736455 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-rbac\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.736733 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 
10:44:02.736874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtks\" (UniqueName: \"kubernetes.io/projected/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-kube-api-access-qvtks\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.740749 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.741798 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-config\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.742074 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.751356 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-logging-loki-query-frontend-grpc\") pod 
\"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.758860 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtks\" (UniqueName: \"kubernetes.io/projected/27a684fe-6402-4a0d-ab7c-e5c4eab14a64-kube-api-access-qvtks\") pod \"logging-loki-query-frontend-69d9546745-dz4l8\" (UID: \"27a684fe-6402-4a0d-ab7c-e5c4eab14a64\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.761762 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839617 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw5l4\" (UniqueName: \"kubernetes.io/projected/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-kube-api-access-gw5l4\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839664 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839710 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tenants\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " 
pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839742 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839778 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839827 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-rbac\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.839871 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840000 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840032 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840052 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-rbac\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840072 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840117 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tenants\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840139 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840161 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqt2q\" (UniqueName: \"kubernetes.io/projected/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-kube-api-access-wqt2q\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840202 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.840223 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.841137 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " 
pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.842378 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: E0202 10:44:02.843090 4845 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 02 10:44:02 crc kubenswrapper[4845]: E0202 10:44:02.843147 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret podName:2b18d0a9-d2cc-4d0b-9ede-a78da13ac929 nodeName:}" failed. No retries permitted until 2026-02-02 10:44:03.343129245 +0000 UTC m=+724.434530765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret") pod "logging-loki-gateway-cf45dcc8c-vr5gw" (UID: "2b18d0a9-d2cc-4d0b-9ede-a78da13ac929") : secret "logging-loki-gateway-http" not found Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.843369 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.844878 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-lokistack-gateway\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: E0202 10:44:02.844989 4845 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 02 10:44:02 crc kubenswrapper[4845]: E0202 10:44:02.845053 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret podName:1a4ec7d2-3bae-4f70-9a46-e90b067a0518 nodeName:}" failed. No retries permitted until 2026-02-02 10:44:03.3450201 +0000 UTC m=+724.436421550 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret") pod "logging-loki-gateway-cf45dcc8c-wn9nt" (UID: "1a4ec7d2-3bae-4f70-9a46-e90b067a0518") : secret "logging-loki-gateway-http" not found Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.845346 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-rbac\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.845369 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tenants\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.846295 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.846465 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-rbac\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.847615 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.849771 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tenants\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.850782 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.864547 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.866841 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqt2q\" (UniqueName: \"kubernetes.io/projected/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-kube-api-access-wqt2q\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.869747 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5l4\" (UniqueName: \"kubernetes.io/projected/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-kube-api-access-gw5l4\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"
Feb 02 10:44:02 crc kubenswrapper[4845]: I0202 10:44:02.983608 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.204074 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-sbp94"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.274483 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-847z7"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.378119 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.378182 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.384646 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1a4ec7d2-3bae-4f70-9a46-e90b067a0518-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-wn9nt\" (UID: \"1a4ec7d2-3bae-4f70-9a46-e90b067a0518\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.386142 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/2b18d0a9-d2cc-4d0b-9ede-a78da13ac929-tls-secret\") pod \"logging-loki-gateway-cf45dcc8c-vr5gw\" (UID: \"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929\") " pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.442112 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.444694 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.448240 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.448374 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.467300 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.484192 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8"]
Feb 02 10:44:03 crc kubenswrapper[4845]: W0202 10:44:03.487105 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27a684fe_6402_4a0d_ab7c_e5c4eab14a64.slice/crio-9de01e1076e7dcb39b360a42f3a67c269f397ebd39929f72b5292e9adab0394c WatchSource:0}: Error finding container 9de01e1076e7dcb39b360a42f3a67c269f397ebd39929f72b5292e9adab0394c: Status 404 returned error can't find the container with id 9de01e1076e7dcb39b360a42f3a67c269f397ebd39929f72b5292e9adab0394c
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.514625 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.515782 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.517588 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.520865 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.530184 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580779 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580845 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580875 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580923 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580950 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580968 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.580996 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-config\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581017 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581036 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581093 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581131 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581159 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5q9\" (UniqueName: \"kubernetes.io/projected/2d889e99-8118-4f52-ab20-b69a55bec079-kube-api-access-6x5q9\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581179 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581200 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwqm\" (UniqueName: \"kubernetes.io/projected/1f889290-f739-444c-a278-254f68d9d886-kube-api-access-jlwqm\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.581219 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-config\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.584996 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.585965 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.588046 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.589105 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.599811 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.606983 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.640879 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682368 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8dt\" (UniqueName: \"kubernetes.io/projected/10b5b71f-47de-4ca2-9133-254552173c73-kube-api-access-nh8dt\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682428 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682466 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682488 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682506 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682531 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682551 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5q9\" (UniqueName: \"kubernetes.io/projected/2d889e99-8118-4f52-ab20-b69a55bec079-kube-api-access-6x5q9\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682569 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682595 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682614 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwqm\" (UniqueName: \"kubernetes.io/projected/1f889290-f739-444c-a278-254f68d9d886-kube-api-access-jlwqm\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682634 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682659 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682676 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682696 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682712 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682728 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682751 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-config\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682771 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682796 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-config\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682829 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-config\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682851 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.682868 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.683613 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.684955 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.688140 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.689697 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f889290-f739-444c-a278-254f68d9d886-config\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.691371 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.692418 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d889e99-8118-4f52-ab20-b69a55bec079-config\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.693436 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.695016 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.695040 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a953406181df73c1ca047b8acbd804f2e868561bc8ffa02202afc7ae6c7ed2a8/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.698186 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.698208 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/31a7571ca6c4152e78090cb6fe4ef838eea0184dcab8df284fca036afc1747d3/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.699635 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.711860 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5q9\" (UniqueName: \"kubernetes.io/projected/2d889e99-8118-4f52-ab20-b69a55bec079-kube-api-access-6x5q9\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.713369 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1f889290-f739-444c-a278-254f68d9d886-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.714170 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.714199 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b46ddeb29837386912cb1ef2e6544d28525a72d4688d1a2a19b2c6658c304d02/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.714287 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d889e99-8118-4f52-ab20-b69a55bec079-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.722502 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwqm\" (UniqueName: \"kubernetes.io/projected/1f889290-f739-444c-a278-254f68d9d886-kube-api-access-jlwqm\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.725983 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" event={"ID":"27a684fe-6402-4a0d-ab7c-e5c4eab14a64","Type":"ContainerStarted","Data":"9de01e1076e7dcb39b360a42f3a67c269f397ebd39929f72b5292e9adab0394c"}
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.726025 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" event={"ID":"796c275c-0c9b-4b2e-ba0f-7fbeb645028a","Type":"ContainerStarted","Data":"cbc4209c03b20422f2d8c6ab4573e6bf63c5b82f6778343a4460ff443dba53ba"}
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.727169 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" event={"ID":"4af06166-f541-44e7-8b4b-37e4f39a8729","Type":"ContainerStarted","Data":"7671610f17bfc58e7d18806a1287ca970ef8eecb05b3291f7fd7dce378ce7247"}
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.737095 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722bd17b-12e2-4e74-a109-fc7e8a9dd2f6\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.748621 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2753f72d-7aa0-4324-baa7-579d7b46eb15\") pod \"logging-loki-ingester-0\" (UID: \"2d889e99-8118-4f52-ab20-b69a55bec079\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.768794 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-365e64a3-5a62-4e52-986f-ab8e64490fb9\") pod \"logging-loki-compactor-0\" (UID: \"1f889290-f739-444c-a278-254f68d9d886\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.769453 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784403 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784456 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8dt\" (UniqueName: \"kubernetes.io/projected/10b5b71f-47de-4ca2-9133-254552173c73-kube-api-access-nh8dt\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784483 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784537 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784586 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784617 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.784678 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-config\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.786434 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-config\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.788007 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.790144 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.790176 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f4d77672c07f1c6dc1c2107fe162f508c8a1c0da8ffd3b4ec81f8850e0496143/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.790446 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.790666 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.791716 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/10b5b71f-47de-4ca2-9133-254552173c73-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.804807 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nh8dt\" (UniqueName: \"kubernetes.io/projected/10b5b71f-47de-4ca2-9133-254552173c73-kube-api-access-nh8dt\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.819722 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e49ca23-3578-47d2-a1ae-ee6ec97e5c08\") pod \"logging-loki-index-gateway-0\" (UID: \"10b5b71f-47de-4ca2-9133-254552173c73\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.839100 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 02 10:44:03 crc kubenswrapper[4845]: I0202 10:44:03.907220 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.047635 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw"] Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.118255 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt"] Feb 02 10:44:04 crc kubenswrapper[4845]: W0202 10:44:04.120984 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4ec7d2_3bae_4f70_9a46_e90b067a0518.slice/crio-4a8edaf48ab7d00ef01502d4b29ee2f1a98c351828e1a83aba3c4328a0965efc WatchSource:0}: Error finding container 4a8edaf48ab7d00ef01502d4b29ee2f1a98c351828e1a83aba3c4328a0965efc: Status 404 returned error can't find the container with id 4a8edaf48ab7d00ef01502d4b29ee2f1a98c351828e1a83aba3c4328a0965efc Feb 02 10:44:04 crc 
kubenswrapper[4845]: I0202 10:44:04.255113 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 02 10:44:04 crc kubenswrapper[4845]: W0202 10:44:04.269948 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d889e99_8118_4f52_ab20_b69a55bec079.slice/crio-8dc65cd44955b04d286338f1906f764c6cc0cf521b0af541aee00492c63ce25f WatchSource:0}: Error finding container 8dc65cd44955b04d286338f1906f764c6cc0cf521b0af541aee00492c63ce25f: Status 404 returned error can't find the container with id 8dc65cd44955b04d286338f1906f764c6cc0cf521b0af541aee00492c63ce25f Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.327194 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 02 10:44:04 crc kubenswrapper[4845]: W0202 10:44:04.328279 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f889290_f739_444c_a278_254f68d9d886.slice/crio-2eb128eb19f75ed3af633bf1d9a28ba239efd309f0949b4fc12d102cd18b623b WatchSource:0}: Error finding container 2eb128eb19f75ed3af633bf1d9a28ba239efd309f0949b4fc12d102cd18b623b: Status 404 returned error can't find the container with id 2eb128eb19f75ed3af633bf1d9a28ba239efd309f0949b4fc12d102cd18b623b Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.388844 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 02 10:44:04 crc kubenswrapper[4845]: W0202 10:44:04.389251 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b5b71f_47de_4ca2_9133_254552173c73.slice/crio-6621bab08e80f97a2ab7a39da0a2d0718bccc2e28df57c5c270af058d1867031 WatchSource:0}: Error finding container 6621bab08e80f97a2ab7a39da0a2d0718bccc2e28df57c5c270af058d1867031: Status 404 
returned error can't find the container with id 6621bab08e80f97a2ab7a39da0a2d0718bccc2e28df57c5c270af058d1867031 Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.733863 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1f889290-f739-444c-a278-254f68d9d886","Type":"ContainerStarted","Data":"2eb128eb19f75ed3af633bf1d9a28ba239efd309f0949b4fc12d102cd18b623b"} Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.735242 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2d889e99-8118-4f52-ab20-b69a55bec079","Type":"ContainerStarted","Data":"8dc65cd44955b04d286338f1906f764c6cc0cf521b0af541aee00492c63ce25f"} Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.736378 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" event={"ID":"1a4ec7d2-3bae-4f70-9a46-e90b067a0518","Type":"ContainerStarted","Data":"4a8edaf48ab7d00ef01502d4b29ee2f1a98c351828e1a83aba3c4328a0965efc"} Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.737571 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"10b5b71f-47de-4ca2-9133-254552173c73","Type":"ContainerStarted","Data":"6621bab08e80f97a2ab7a39da0a2d0718bccc2e28df57c5c270af058d1867031"} Feb 02 10:44:04 crc kubenswrapper[4845]: I0202 10:44:04.739106 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" event={"ID":"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929","Type":"ContainerStarted","Data":"6664fdb51e9f6a8b391911c6c7d62987a130b0a6db4a43a9ecda5af88cb31a75"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.796809 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" 
event={"ID":"10b5b71f-47de-4ca2-9133-254552173c73","Type":"ContainerStarted","Data":"78687d78e3128838ea4b33d9d6c0480812586a8dda1fb0e47f202f6181ff4482"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.798098 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.799623 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" event={"ID":"27a684fe-6402-4a0d-ab7c-e5c4eab14a64","Type":"ContainerStarted","Data":"9e9cf2537eb999394a49df9adfcbd67effcac750eebcfc8159d0ed7634816b11"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.800029 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.801413 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" event={"ID":"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929","Type":"ContainerStarted","Data":"f25968fd3ac0ec38b9e0068b09c40f923f915c065d0ceb0b769995a2864f808b"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.802974 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" event={"ID":"796c275c-0c9b-4b2e-ba0f-7fbeb645028a","Type":"ContainerStarted","Data":"b294f02fdcea07f8d56ef83c4d09d05f3b673932a5dc905ba73b25323a1cf900"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.803501 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.805254 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" 
event={"ID":"4af06166-f541-44e7-8b4b-37e4f39a8729","Type":"ContainerStarted","Data":"8c3cf11d52f709e9ffb9c631880d793d912342bc18f98575e29699dbd166a68e"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.805612 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.807637 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1f889290-f739-444c-a278-254f68d9d886","Type":"ContainerStarted","Data":"e4773bff38cfe56687cab44f15a138662c8df95a8ab1fe431416a4e630a3d43c"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.808044 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.810743 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2d889e99-8118-4f52-ab20-b69a55bec079","Type":"ContainerStarted","Data":"268b8309eaae83fd84768af1f8ff9df6c24ac610dcb9cb3d979577890a74fb11"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.811219 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.813186 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" event={"ID":"1a4ec7d2-3bae-4f70-9a46-e90b067a0518","Type":"ContainerStarted","Data":"b79cc812663c2471f6b6c4042318a01a929fbf597d53f35745d47d6dfeb5bce6"} Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.822426 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.44483239 podStartE2EDuration="6.822402577s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" 
firstStartedPulling="2026-02-02 10:44:04.391334568 +0000 UTC m=+725.482736018" lastFinishedPulling="2026-02-02 10:44:07.768904755 +0000 UTC m=+728.860306205" observedRunningTime="2026-02-02 10:44:08.822097078 +0000 UTC m=+729.913498548" watchObservedRunningTime="2026-02-02 10:44:08.822402577 +0000 UTC m=+729.913804027" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.846706 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" podStartSLOduration=2.254099474 podStartE2EDuration="6.846687279s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:03.249092082 +0000 UTC m=+724.340493522" lastFinishedPulling="2026-02-02 10:44:07.841679877 +0000 UTC m=+728.933081327" observedRunningTime="2026-02-02 10:44:08.839616824 +0000 UTC m=+729.931018274" watchObservedRunningTime="2026-02-02 10:44:08.846687279 +0000 UTC m=+729.938088729" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.871261 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.345944374 podStartE2EDuration="6.871237248s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:04.330418729 +0000 UTC m=+725.421820179" lastFinishedPulling="2026-02-02 10:44:07.855711603 +0000 UTC m=+728.947113053" observedRunningTime="2026-02-02 10:44:08.861387193 +0000 UTC m=+729.952788643" watchObservedRunningTime="2026-02-02 10:44:08.871237248 +0000 UTC m=+729.962638698" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.892306 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" podStartSLOduration=2.352461276 podStartE2EDuration="6.892288306s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:03.288595214 +0000 UTC m=+724.379996664" 
lastFinishedPulling="2026-02-02 10:44:07.828422244 +0000 UTC m=+728.919823694" observedRunningTime="2026-02-02 10:44:08.888838916 +0000 UTC m=+729.980240366" watchObservedRunningTime="2026-02-02 10:44:08.892288306 +0000 UTC m=+729.983689756" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.915166 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" podStartSLOduration=2.585845027 podStartE2EDuration="6.915148316s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:03.488757446 +0000 UTC m=+724.580158896" lastFinishedPulling="2026-02-02 10:44:07.818060735 +0000 UTC m=+728.909462185" observedRunningTime="2026-02-02 10:44:08.908981558 +0000 UTC m=+730.000383018" watchObservedRunningTime="2026-02-02 10:44:08.915148316 +0000 UTC m=+730.006549766" Feb 02 10:44:08 crc kubenswrapper[4845]: I0202 10:44:08.930171 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.362876124 podStartE2EDuration="6.93015154s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:04.272681921 +0000 UTC m=+725.364083381" lastFinishedPulling="2026-02-02 10:44:07.839957347 +0000 UTC m=+728.931358797" observedRunningTime="2026-02-02 10:44:08.925047022 +0000 UTC m=+730.016448472" watchObservedRunningTime="2026-02-02 10:44:08.93015154 +0000 UTC m=+730.021552990" Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.828008 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" event={"ID":"2b18d0a9-d2cc-4d0b-9ede-a78da13ac929","Type":"ContainerStarted","Data":"0ae8b619b97254e1988c8e40e4fe484e25c2f48bb2c59373b65e42fb11d7ee91"} Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.828802 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.830449 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" event={"ID":"1a4ec7d2-3bae-4f70-9a46-e90b067a0518","Type":"ContainerStarted","Data":"e28f81a10502d763f12a8c74c9cd0fcef2297c34733c92185f483974f167657d"} Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.837766 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.853099 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" podStartSLOduration=2.699734746 podStartE2EDuration="8.853076236s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:04.066276668 +0000 UTC m=+725.157678118" lastFinishedPulling="2026-02-02 10:44:10.219618158 +0000 UTC m=+731.311019608" observedRunningTime="2026-02-02 10:44:10.847748933 +0000 UTC m=+731.939150373" watchObservedRunningTime="2026-02-02 10:44:10.853076236 +0000 UTC m=+731.944477686" Feb 02 10:44:10 crc kubenswrapper[4845]: I0202 10:44:10.870397 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" podStartSLOduration=2.778160792 podStartE2EDuration="8.870379096s" podCreationTimestamp="2026-02-02 10:44:02 +0000 UTC" firstStartedPulling="2026-02-02 10:44:04.123905823 +0000 UTC m=+725.215307273" lastFinishedPulling="2026-02-02 10:44:10.216124127 +0000 UTC m=+731.307525577" observedRunningTime="2026-02-02 10:44:10.86530193 +0000 UTC m=+731.956703380" watchObservedRunningTime="2026-02-02 10:44:10.870379096 +0000 UTC m=+731.961780546" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.836685 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.837153 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.837195 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.845661 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.847997 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-vr5gw" Feb 02 10:44:11 crc kubenswrapper[4845]: I0202 10:44:11.848709 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-cf45dcc8c-wn9nt" Feb 02 10:44:16 crc kubenswrapper[4845]: I0202 10:44:16.237318 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:44:16 crc kubenswrapper[4845]: I0202 10:44:16.237388 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:44:23 crc kubenswrapper[4845]: I0202 10:44:23.776019 4845 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 02 10:44:23 crc kubenswrapper[4845]: I0202 10:44:23.777651 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2d889e99-8118-4f52-ab20-b69a55bec079" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:44:23 crc kubenswrapper[4845]: I0202 10:44:23.846768 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 02 10:44:23 crc kubenswrapper[4845]: I0202 10:44:23.914161 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 10:44:32 crc kubenswrapper[4845]: I0202 10:44:32.690260 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-847z7" Feb 02 10:44:32 crc kubenswrapper[4845]: I0202 10:44:32.767327 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-sbp94" Feb 02 10:44:32 crc kubenswrapper[4845]: I0202 10:44:32.994166 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-dz4l8" Feb 02 10:44:33 crc kubenswrapper[4845]: I0202 10:44:33.780252 4845 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 02 10:44:33 crc kubenswrapper[4845]: I0202 10:44:33.780319 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2d889e99-8118-4f52-ab20-b69a55bec079" containerName="loki-ingester" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Feb 02 10:44:34 crc kubenswrapper[4845]: I0202 10:44:34.072840 4845 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:44:43 crc kubenswrapper[4845]: I0202 10:44:43.775221 4845 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 02 10:44:43 crc kubenswrapper[4845]: I0202 10:44:43.775772 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2d889e99-8118-4f52-ab20-b69a55bec079" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:44:46 crc kubenswrapper[4845]: I0202 10:44:46.237820 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:44:46 crc kubenswrapper[4845]: I0202 10:44:46.238184 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:44:46 crc kubenswrapper[4845]: I0202 10:44:46.238230 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:44:46 crc kubenswrapper[4845]: I0202 10:44:46.238830 4845 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:44:46 crc kubenswrapper[4845]: I0202 10:44:46.238873 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8" gracePeriod=600 Feb 02 10:44:47 crc kubenswrapper[4845]: I0202 10:44:47.112720 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8" exitCode=0 Feb 02 10:44:47 crc kubenswrapper[4845]: I0202 10:44:47.112781 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8"} Feb 02 10:44:47 crc kubenswrapper[4845]: I0202 10:44:47.113418 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd"} Feb 02 10:44:47 crc kubenswrapper[4845]: I0202 10:44:47.113455 4845 scope.go:117] "RemoveContainer" containerID="511b5a9de737657a9a1ff84c736b95abf52206e96ffdc8cf5decfdca7aa28582" Feb 02 10:44:53 crc kubenswrapper[4845]: I0202 10:44:53.775579 4845 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness 
probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 02 10:44:53 crc kubenswrapper[4845]: I0202 10:44:53.776453 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2d889e99-8118-4f52-ab20-b69a55bec079" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.214654 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq"] Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.215981 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.218088 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.218209 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.222726 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq"] Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.407940 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.407994 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.408262 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvk9\" (UniqueName: \"kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.509565 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvk9\" (UniqueName: \"kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.509655 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.509698 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.511643 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.516216 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.529439 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvk9\" (UniqueName: \"kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9\") pod \"collect-profiles-29500485-wl2wq\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.547237 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:00 crc kubenswrapper[4845]: I0202 10:45:00.963064 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq"] Feb 02 10:45:01 crc kubenswrapper[4845]: I0202 10:45:01.224428 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" event={"ID":"6af5c06e-cf07-4f85-97e9-6b93ec03281c","Type":"ContainerStarted","Data":"a18e0c7c2dae09d3f5627d9a9acf7414e19ce7a97f56c94017a2bc4812f89130"} Feb 02 10:45:01 crc kubenswrapper[4845]: I0202 10:45:01.224470 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" event={"ID":"6af5c06e-cf07-4f85-97e9-6b93ec03281c","Type":"ContainerStarted","Data":"bc665438186f6e403402c20d6434c54ee1529c3a1c78753ddc00b1b116fb19df"} Feb 02 10:45:01 crc kubenswrapper[4845]: I0202 10:45:01.243204 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" podStartSLOduration=1.243177067 podStartE2EDuration="1.243177067s" podCreationTimestamp="2026-02-02 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:45:01.23844357 +0000 UTC m=+782.329845020" watchObservedRunningTime="2026-02-02 10:45:01.243177067 +0000 UTC m=+782.334578517" Feb 02 10:45:02 crc kubenswrapper[4845]: I0202 10:45:02.235430 4845 generic.go:334] "Generic (PLEG): container finished" podID="6af5c06e-cf07-4f85-97e9-6b93ec03281c" containerID="a18e0c7c2dae09d3f5627d9a9acf7414e19ce7a97f56c94017a2bc4812f89130" exitCode=0 Feb 02 10:45:02 crc kubenswrapper[4845]: I0202 10:45:02.235558 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" event={"ID":"6af5c06e-cf07-4f85-97e9-6b93ec03281c","Type":"ContainerDied","Data":"a18e0c7c2dae09d3f5627d9a9acf7414e19ce7a97f56c94017a2bc4812f89130"} Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.472280 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.653066 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume\") pod \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.653253 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvk9\" (UniqueName: \"kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9\") pod \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.653306 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume\") pod \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\" (UID: \"6af5c06e-cf07-4f85-97e9-6b93ec03281c\") " Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.654267 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume" (OuterVolumeSpecName: "config-volume") pod "6af5c06e-cf07-4f85-97e9-6b93ec03281c" (UID: "6af5c06e-cf07-4f85-97e9-6b93ec03281c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.657589 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6af5c06e-cf07-4f85-97e9-6b93ec03281c" (UID: "6af5c06e-cf07-4f85-97e9-6b93ec03281c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.657659 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9" (OuterVolumeSpecName: "kube-api-access-qdvk9") pod "6af5c06e-cf07-4f85-97e9-6b93ec03281c" (UID: "6af5c06e-cf07-4f85-97e9-6b93ec03281c"). InnerVolumeSpecName "kube-api-access-qdvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.755567 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdvk9\" (UniqueName: \"kubernetes.io/projected/6af5c06e-cf07-4f85-97e9-6b93ec03281c-kube-api-access-qdvk9\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.755634 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af5c06e-cf07-4f85-97e9-6b93ec03281c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.755645 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6af5c06e-cf07-4f85-97e9-6b93ec03281c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:03 crc kubenswrapper[4845]: I0202 10:45:03.774464 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 02 10:45:04 crc kubenswrapper[4845]: I0202 
10:45:04.255026 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" event={"ID":"6af5c06e-cf07-4f85-97e9-6b93ec03281c","Type":"ContainerDied","Data":"bc665438186f6e403402c20d6434c54ee1529c3a1c78753ddc00b1b116fb19df"} Feb 02 10:45:04 crc kubenswrapper[4845]: I0202 10:45:04.255656 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc665438186f6e403402c20d6434c54ee1529c3a1c78753ddc00b1b116fb19df" Feb 02 10:45:04 crc kubenswrapper[4845]: I0202 10:45:04.255730 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.231548 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-qdkf9"] Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.232404 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af5c06e-cf07-4f85-97e9-6b93ec03281c" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.232420 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af5c06e-cf07-4f85-97e9-6b93ec03281c" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.232565 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af5c06e-cf07-4f85-97e9-6b93ec03281c" containerName="collect-profiles" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.233134 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.235934 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4mdxq" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.235957 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.236042 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.236685 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.239260 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.247806 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.252287 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qdkf9"] Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362154 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzn7x\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x\") pod \"collector-qdkf9\" (UID: 
\"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362187 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362210 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362240 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362267 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362303 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " 
pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362326 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362358 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362394 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.362414 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.385657 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-qdkf9"] Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.386250 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-fzn7x metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-qdkf9" podUID="4649e099-a892-4773-86ef-705fea600417" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465241 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465339 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465373 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465409 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465439 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config\") pod \"collector-qdkf9\" (UID: 
\"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465456 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465491 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465511 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzn7x\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465539 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465562 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.465589 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.466154 4845 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.466279 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver podName:4649e099-a892-4773-86ef-705fea600417 nodeName:}" failed. No retries permitted until 2026-02-02 10:45:22.966245731 +0000 UTC m=+804.057647191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver") pod "collector-qdkf9" (UID: "4649e099-a892-4773-86ef-705fea600417") : secret "collector-syslog-receiver" not found Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.466425 4845 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 02 10:45:22 crc kubenswrapper[4845]: E0202 10:45:22.466499 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics podName:4649e099-a892-4773-86ef-705fea600417 nodeName:}" failed. No retries permitted until 2026-02-02 10:45:22.966479018 +0000 UTC m=+804.057880458 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics") pod "collector-qdkf9" (UID: "4649e099-a892-4773-86ef-705fea600417") : secret "collector-metrics" not found Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.466735 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.466827 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.467489 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.467627 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.467923 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config\") pod \"collector-qdkf9\" (UID: 
\"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.477333 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.477704 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.491920 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzn7x\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.494397 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.973637 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.974005 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.978099 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:22 crc kubenswrapper[4845]: I0202 10:45:22.982958 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") pod \"collector-qdkf9\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " pod="openshift-logging/collector-qdkf9" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.395918 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-qdkf9" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.406642 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qdkf9" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.581694 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582049 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582070 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582087 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582144 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582166 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582201 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582240 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582250 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir" (OuterVolumeSpecName: "datadir") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582331 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582353 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzn7x\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582429 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt\") pod \"4649e099-a892-4773-86ef-705fea600417\" (UID: \"4649e099-a892-4773-86ef-705fea600417\") " Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582717 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config" (OuterVolumeSpecName: "config") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.582968 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.583071 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.583084 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.583094 4845 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4649e099-a892-4773-86ef-705fea600417-datadir\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.583107 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.585716 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics" (OuterVolumeSpecName: "metrics") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.585787 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token" (OuterVolumeSpecName: "sa-token") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.585917 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x" (OuterVolumeSpecName: "kube-api-access-fzn7x") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "kube-api-access-fzn7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.586116 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token" (OuterVolumeSpecName: "collector-token") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.586490 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.586652 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp" (OuterVolumeSpecName: "tmp") pod "4649e099-a892-4773-86ef-705fea600417" (UID: "4649e099-a892-4773-86ef-705fea600417"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685145 4845 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685183 4845 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4649e099-a892-4773-86ef-705fea600417-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685194 4845 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685205 4845 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685214 4845 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4649e099-a892-4773-86ef-705fea600417-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685223 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-trusted-ca\") on 
node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685231 4845 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685239 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzn7x\" (UniqueName: \"kubernetes.io/projected/4649e099-a892-4773-86ef-705fea600417-kube-api-access-fzn7x\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:23 crc kubenswrapper[4845]: I0202 10:45:23.685251 4845 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4649e099-a892-4773-86ef-705fea600417-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.401873 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-qdkf9" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.436329 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-qdkf9"] Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.451974 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-qdkf9"] Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.457041 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-bkwj8"] Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.458038 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.459924 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.462055 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.462296 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4mdxq" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.462679 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.462941 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.471281 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.475875 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-bkwj8"] Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.598686 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-trusted-ca\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.598767 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-entrypoint\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " 
pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.598822 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54453df2-b815-42be-9542-aef7eed68aeb-tmp\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.598873 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.598927 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw47z\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-kube-api-access-pw47z\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.599011 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-syslog-receiver\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.599032 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 
crc kubenswrapper[4845]: I0202 10:45:24.599088 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/54453df2-b815-42be-9542-aef7eed68aeb-datadir\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.599114 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-metrics\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.599252 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config-openshift-service-cacrt\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.599297 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-sa-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701091 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/54453df2-b815-42be-9542-aef7eed68aeb-datadir\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701148 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-metrics\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701200 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/54453df2-b815-42be-9542-aef7eed68aeb-datadir\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701295 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config-openshift-service-cacrt\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-sa-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701387 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-trusted-ca\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701411 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-entrypoint\") pod 
\"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701432 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54453df2-b815-42be-9542-aef7eed68aeb-tmp\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701463 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw47z\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-kube-api-access-pw47z\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701533 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-syslog-receiver\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.701557 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: 
I0202 10:45:24.702631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-trusted-ca\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.702644 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.702678 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-entrypoint\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.702802 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/54453df2-b815-42be-9542-aef7eed68aeb-config-openshift-service-cacrt\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.710519 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-syslog-receiver\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.714409 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-metrics\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.714620 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54453df2-b815-42be-9542-aef7eed68aeb-tmp\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.715581 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/54453df2-b815-42be-9542-aef7eed68aeb-collector-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.717690 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-sa-token\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.726589 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw47z\" (UniqueName: \"kubernetes.io/projected/54453df2-b815-42be-9542-aef7eed68aeb-kube-api-access-pw47z\") pod \"collector-bkwj8\" (UID: \"54453df2-b815-42be-9542-aef7eed68aeb\") " pod="openshift-logging/collector-bkwj8" Feb 02 10:45:24 crc kubenswrapper[4845]: I0202 10:45:24.782010 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-bkwj8" Feb 02 10:45:25 crc kubenswrapper[4845]: I0202 10:45:25.229952 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-bkwj8"] Feb 02 10:45:25 crc kubenswrapper[4845]: I0202 10:45:25.409492 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-bkwj8" event={"ID":"54453df2-b815-42be-9542-aef7eed68aeb","Type":"ContainerStarted","Data":"cf1be961458e23208d77e538c1db92cce617f00a58998df1c1158144ad2c8432"} Feb 02 10:45:25 crc kubenswrapper[4845]: I0202 10:45:25.723683 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4649e099-a892-4773-86ef-705fea600417" path="/var/lib/kubelet/pods/4649e099-a892-4773-86ef-705fea600417/volumes" Feb 02 10:45:32 crc kubenswrapper[4845]: I0202 10:45:32.464052 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-bkwj8" event={"ID":"54453df2-b815-42be-9542-aef7eed68aeb","Type":"ContainerStarted","Data":"3a32a6da534ae3eb91e672e8dae3cc2f5b718751b16f3faa4c4578bf1c0f5f3e"} Feb 02 10:45:32 crc kubenswrapper[4845]: I0202 10:45:32.487029 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-bkwj8" podStartSLOduration=1.964923389 podStartE2EDuration="8.487012178s" podCreationTimestamp="2026-02-02 10:45:24 +0000 UTC" firstStartedPulling="2026-02-02 10:45:25.246464009 +0000 UTC m=+806.337865459" lastFinishedPulling="2026-02-02 10:45:31.768552798 +0000 UTC m=+812.859954248" observedRunningTime="2026-02-02 10:45:32.483758584 +0000 UTC m=+813.575160034" watchObservedRunningTime="2026-02-02 10:45:32.487012178 +0000 UTC m=+813.578413628" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.372110 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq"] Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.374950 4845 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.378023 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.445088 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq"] Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.520699 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.520782 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.520870 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scml7\" (UniqueName: \"kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 
10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.622817 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.622915 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.622972 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scml7\" (UniqueName: \"kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.623532 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.623825 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.643707 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scml7\" (UniqueName: \"kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:01 crc kubenswrapper[4845]: I0202 10:46:01.697768 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:02 crc kubenswrapper[4845]: I0202 10:46:02.146436 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq"] Feb 02 10:46:02 crc kubenswrapper[4845]: I0202 10:46:02.669924 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" event={"ID":"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92","Type":"ContainerStarted","Data":"0ed13668fba3d3c04be42e0559bf8f0a6f2a0025f1a528d3973c4070b9282195"} Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.677148 4845 generic.go:334] "Generic (PLEG): container finished" podID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerID="289acee830b3427c14016b1f2c49bd685323c61f36d437f388f65b1c9a1a61a8" exitCode=0 Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.677201 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" event={"ID":"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92","Type":"ContainerDied","Data":"289acee830b3427c14016b1f2c49bd685323c61f36d437f388f65b1c9a1a61a8"} Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.709731 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.711252 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.721395 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.770790 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhbf\" (UniqueName: \"kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.771098 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.771239 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" 
Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.873576 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhbf\" (UniqueName: \"kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.873660 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.873702 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.874224 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.874301 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:03 crc kubenswrapper[4845]: I0202 10:46:03.894052 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhbf\" (UniqueName: \"kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf\") pod \"redhat-operators-k47lx\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:04 crc kubenswrapper[4845]: I0202 10:46:04.033846 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:04 crc kubenswrapper[4845]: I0202 10:46:04.508503 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:04 crc kubenswrapper[4845]: I0202 10:46:04.688036 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerStarted","Data":"f64227e8467858e78a947aa30dbb8e5c523b4bf1e03d44b55588307424690239"} Feb 02 10:46:05 crc kubenswrapper[4845]: I0202 10:46:05.696610 4845 generic.go:334] "Generic (PLEG): container finished" podID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerID="376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f" exitCode=0 Feb 02 10:46:05 crc kubenswrapper[4845]: I0202 10:46:05.696662 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerDied","Data":"376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f"} Feb 02 10:46:05 crc kubenswrapper[4845]: I0202 10:46:05.699573 4845 generic.go:334] "Generic (PLEG): container finished" podID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerID="77bf7f8d7a6baaa76df13bbe774d42150b872098bd56855ba76680ab82eff2d6" exitCode=0 Feb 02 10:46:05 crc kubenswrapper[4845]: I0202 10:46:05.699607 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" event={"ID":"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92","Type":"ContainerDied","Data":"77bf7f8d7a6baaa76df13bbe774d42150b872098bd56855ba76680ab82eff2d6"} Feb 02 10:46:06 crc kubenswrapper[4845]: I0202 10:46:06.706746 4845 generic.go:334] "Generic (PLEG): container finished" podID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerID="3727a44ded759bc6d1d03787f972a5d9f13741e7eab3d69e1d386762e48782aa" exitCode=0 Feb 02 10:46:06 crc kubenswrapper[4845]: I0202 10:46:06.707060 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" event={"ID":"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92","Type":"ContainerDied","Data":"3727a44ded759bc6d1d03787f972a5d9f13741e7eab3d69e1d386762e48782aa"} Feb 02 10:46:06 crc kubenswrapper[4845]: I0202 10:46:06.709176 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerStarted","Data":"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616"} Feb 02 10:46:07 crc kubenswrapper[4845]: I0202 10:46:07.718455 4845 generic.go:334] "Generic (PLEG): container finished" podID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerID="608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616" exitCode=0 Feb 02 10:46:07 crc kubenswrapper[4845]: I0202 10:46:07.727997 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerDied","Data":"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616"} Feb 02 10:46:07 crc kubenswrapper[4845]: I0202 10:46:07.972761 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.140248 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scml7\" (UniqueName: \"kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7\") pod \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.140411 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle\") pod \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.140445 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util\") pod \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\" (UID: \"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92\") " Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.152471 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle" (OuterVolumeSpecName: "bundle") pod "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" (UID: "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.158987 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7" (OuterVolumeSpecName: "kube-api-access-scml7") pod "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" (UID: "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92"). InnerVolumeSpecName "kube-api-access-scml7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.242801 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.242836 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scml7\" (UniqueName: \"kubernetes.io/projected/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-kube-api-access-scml7\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.264481 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util" (OuterVolumeSpecName: "util") pod "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" (UID: "cbe2dd1b-0b96-4fb7-8873-f9c1378bde92"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.344239 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbe2dd1b-0b96-4fb7-8873-f9c1378bde92-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.727218 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerStarted","Data":"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c"} Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.729570 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" event={"ID":"cbe2dd1b-0b96-4fb7-8873-f9c1378bde92","Type":"ContainerDied","Data":"0ed13668fba3d3c04be42e0559bf8f0a6f2a0025f1a528d3973c4070b9282195"} Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.729613 4845 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed13668fba3d3c04be42e0559bf8f0a6f2a0025f1a528d3973c4070b9282195" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.729620 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq" Feb 02 10:46:08 crc kubenswrapper[4845]: I0202 10:46:08.758183 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k47lx" podStartSLOduration=3.151007862 podStartE2EDuration="5.75815783s" podCreationTimestamp="2026-02-02 10:46:03 +0000 UTC" firstStartedPulling="2026-02-02 10:46:05.698264845 +0000 UTC m=+846.789666295" lastFinishedPulling="2026-02-02 10:46:08.305414813 +0000 UTC m=+849.396816263" observedRunningTime="2026-02-02 10:46:08.751065225 +0000 UTC m=+849.842466685" watchObservedRunningTime="2026-02-02 10:46:08.75815783 +0000 UTC m=+849.849559280" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.023714 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xpndf"] Feb 02 10:46:11 crc kubenswrapper[4845]: E0202 10:46:11.024295 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="util" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.024313 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="util" Feb 02 10:46:11 crc kubenswrapper[4845]: E0202 10:46:11.024350 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="pull" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.024358 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="pull" Feb 02 10:46:11 crc kubenswrapper[4845]: E0202 10:46:11.024371 
4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="extract" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.024379 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="extract" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.036219 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe2dd1b-0b96-4fb7-8873-f9c1378bde92" containerName="extract" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.037415 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.040900 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.041069 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.041124 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c6cxh" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.050845 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xpndf"] Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.186697 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6t4\" (UniqueName: \"kubernetes.io/projected/17b0c917-994c-41bc-9fbf-6e9d86d65bca-kube-api-access-lg6t4\") pod \"nmstate-operator-646758c888-xpndf\" (UID: \"17b0c917-994c-41bc-9fbf-6e9d86d65bca\") " pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.289194 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lg6t4\" (UniqueName: \"kubernetes.io/projected/17b0c917-994c-41bc-9fbf-6e9d86d65bca-kube-api-access-lg6t4\") pod \"nmstate-operator-646758c888-xpndf\" (UID: \"17b0c917-994c-41bc-9fbf-6e9d86d65bca\") " pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.312687 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6t4\" (UniqueName: \"kubernetes.io/projected/17b0c917-994c-41bc-9fbf-6e9d86d65bca-kube-api-access-lg6t4\") pod \"nmstate-operator-646758c888-xpndf\" (UID: \"17b0c917-994c-41bc-9fbf-6e9d86d65bca\") " pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.410837 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" Feb 02 10:46:11 crc kubenswrapper[4845]: I0202 10:46:11.997243 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xpndf"] Feb 02 10:46:12 crc kubenswrapper[4845]: W0202 10:46:12.007906 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b0c917_994c_41bc_9fbf_6e9d86d65bca.slice/crio-233d65503c4378f7c1557b5f02dbc6eb5cced22633f6b0ca4577a6370816cc5d WatchSource:0}: Error finding container 233d65503c4378f7c1557b5f02dbc6eb5cced22633f6b0ca4577a6370816cc5d: Status 404 returned error can't find the container with id 233d65503c4378f7c1557b5f02dbc6eb5cced22633f6b0ca4577a6370816cc5d Feb 02 10:46:12 crc kubenswrapper[4845]: I0202 10:46:12.763713 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" event={"ID":"17b0c917-994c-41bc-9fbf-6e9d86d65bca","Type":"ContainerStarted","Data":"233d65503c4378f7c1557b5f02dbc6eb5cced22633f6b0ca4577a6370816cc5d"} Feb 02 10:46:14 crc kubenswrapper[4845]: I0202 
10:46:14.035039 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:14 crc kubenswrapper[4845]: I0202 10:46:14.035087 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:14 crc kubenswrapper[4845]: I0202 10:46:14.800195 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" event={"ID":"17b0c917-994c-41bc-9fbf-6e9d86d65bca","Type":"ContainerStarted","Data":"5158065c1ba3f85ebdda886573c78529f7686a61d326243096b2ff309c90c42e"} Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.077519 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k47lx" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="registry-server" probeResult="failure" output=< Feb 02 10:46:15 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 10:46:15 crc kubenswrapper[4845]: > Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.856650 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-xpndf" podStartSLOduration=2.297230556 podStartE2EDuration="4.856629356s" podCreationTimestamp="2026-02-02 10:46:11 +0000 UTC" firstStartedPulling="2026-02-02 10:46:12.010510213 +0000 UTC m=+853.101911663" lastFinishedPulling="2026-02-02 10:46:14.569909013 +0000 UTC m=+855.661310463" observedRunningTime="2026-02-02 10:46:14.820337586 +0000 UTC m=+855.911739026" watchObservedRunningTime="2026-02-02 10:46:15.856629356 +0000 UTC m=+856.948030816" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.860171 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksh5c"] Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.873787 4845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5"] Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.862294 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.907600 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ks7bq"] Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.908472 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.908645 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.910753 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.915428 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mw55t" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.915626 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksh5c"] Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.920965 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5"] Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.965074 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqp4z\" (UniqueName: \"kubernetes.io/projected/ed30c5ac-3449-4902-b948-34958198b224-kube-api-access-wqp4z\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 
10:46:15.965520 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:15 crc kubenswrapper[4845]: I0202 10:46:15.965623 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvfw\" (UniqueName: \"kubernetes.io/projected/65b8d7a7-4de6-4edc-b652-999572c3494a-kube-api-access-4nvfw\") pod \"nmstate-metrics-54757c584b-ksh5c\" (UID: \"65b8d7a7-4de6-4edc-b652-999572c3494a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.030398 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4"] Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.031514 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.034291 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jjxtg" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.035479 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.035626 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.055376 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4"] Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.067790 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d66b5\" (UniqueName: \"kubernetes.io/projected/8c3ff69a-c422-491b-a933-0522f29d7e7c-kube-api-access-d66b5\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068395 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-nmstate-lock\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068550 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqp4z\" (UniqueName: \"kubernetes.io/projected/ed30c5ac-3449-4902-b948-34958198b224-kube-api-access-wqp4z\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068631 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-dbus-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068706 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068794 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-ovs-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.068875 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvfw\" (UniqueName: \"kubernetes.io/projected/65b8d7a7-4de6-4edc-b652-999572c3494a-kube-api-access-4nvfw\") pod \"nmstate-metrics-54757c584b-ksh5c\" (UID: \"65b8d7a7-4de6-4edc-b652-999572c3494a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" Feb 02 10:46:16 crc kubenswrapper[4845]: E0202 10:46:16.068934 4845 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 10:46:16 crc kubenswrapper[4845]: E0202 10:46:16.069221 4845 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair podName:ed30c5ac-3449-4902-b948-34958198b224 nodeName:}" failed. No retries permitted until 2026-02-02 10:46:16.569200116 +0000 UTC m=+857.660601556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-k2dv5" (UID: "ed30c5ac-3449-4902-b948-34958198b224") : secret "openshift-nmstate-webhook" not found Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.097967 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqp4z\" (UniqueName: \"kubernetes.io/projected/ed30c5ac-3449-4902-b948-34958198b224-kube-api-access-wqp4z\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.104136 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvfw\" (UniqueName: \"kubernetes.io/projected/65b8d7a7-4de6-4edc-b652-999572c3494a-kube-api-access-4nvfw\") pod \"nmstate-metrics-54757c584b-ksh5c\" (UID: \"65b8d7a7-4de6-4edc-b652-999572c3494a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170459 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d66b5\" (UniqueName: \"kubernetes.io/projected/8c3ff69a-c422-491b-a933-0522f29d7e7c-kube-api-access-d66b5\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170541 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-nmstate-lock\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170599 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-dbus-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170630 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170683 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-ovs-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170728 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170790 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-229lr\" (UniqueName: \"kubernetes.io/projected/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-kube-api-access-229lr\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.170932 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-nmstate-lock\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.171035 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-ovs-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.171260 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c3ff69a-c422-491b-a933-0522f29d7e7c-dbus-socket\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.194955 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d66b5\" (UniqueName: \"kubernetes.io/projected/8c3ff69a-c422-491b-a933-0522f29d7e7c-kube-api-access-d66b5\") pod \"nmstate-handler-ks7bq\" (UID: \"8c3ff69a-c422-491b-a933-0522f29d7e7c\") " pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.229612 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 
10:46:16.230652 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.237060 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.249234 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.277487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.277816 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.278029 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229lr\" (UniqueName: \"kubernetes.io/projected/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-kube-api-access-229lr\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.279548 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.289755 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.297080 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.328552 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229lr\" (UniqueName: \"kubernetes.io/projected/f49a4fe2-aa60-4d14-a9bb-f13d0066a542-kube-api-access-229lr\") pod \"nmstate-console-plugin-7754f76f8b-2phr4\" (UID: \"f49a4fe2-aa60-4d14-a9bb-f13d0066a542\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.363001 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.380851 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.381147 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.381287 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.397572 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6kt\" (UniqueName: \"kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.397628 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.397765 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.397855 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.499924 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500290 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500389 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500416 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500451 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500486 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6kt\" (UniqueName: \"kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.500516 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.502066 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.506689 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.506951 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.507670 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.514901 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.514916 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.520444 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6kt\" (UniqueName: \"kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt\") pod \"console-58d87f97d7-w9v5x\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.602279 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.605643 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed30c5ac-3449-4902-b948-34958198b224-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k2dv5\" (UID: \"ed30c5ac-3449-4902-b948-34958198b224\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.634942 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.714858 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksh5c"] Feb 02 10:46:16 crc kubenswrapper[4845]: W0202 10:46:16.722514 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b8d7a7_4de6_4edc_b652_999572c3494a.slice/crio-4370510a95928b8ff4e998d2870157f06396ec4c3f14b64b55e2685d12d2e497 WatchSource:0}: Error finding container 4370510a95928b8ff4e998d2870157f06396ec4c3f14b64b55e2685d12d2e497: Status 404 returned error can't find the container with id 4370510a95928b8ff4e998d2870157f06396ec4c3f14b64b55e2685d12d2e497 Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.822084 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4"] Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.822247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ks7bq" event={"ID":"8c3ff69a-c422-491b-a933-0522f29d7e7c","Type":"ContainerStarted","Data":"0b396858010a9081426806fe57a0ca1beb98ee5e036d024acf2ec7bd2bdce074"} Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.823338 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" event={"ID":"65b8d7a7-4de6-4edc-b652-999572c3494a","Type":"ContainerStarted","Data":"4370510a95928b8ff4e998d2870157f06396ec4c3f14b64b55e2685d12d2e497"} Feb 02 10:46:16 crc kubenswrapper[4845]: W0202 10:46:16.828663 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf49a4fe2_aa60_4d14_a9bb_f13d0066a542.slice/crio-c11f12d1825f59697a4f6dadda31bbeed2e0a28a3b18e733bc9c274834f1aff5 WatchSource:0}: Error finding container 
c11f12d1825f59697a4f6dadda31bbeed2e0a28a3b18e733bc9c274834f1aff5: Status 404 returned error can't find the container with id c11f12d1825f59697a4f6dadda31bbeed2e0a28a3b18e733bc9c274834f1aff5 Feb 02 10:46:16 crc kubenswrapper[4845]: I0202 10:46:16.873474 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.137740 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.281989 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5"] Feb 02 10:46:17 crc kubenswrapper[4845]: W0202 10:46:17.285282 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded30c5ac_3449_4902_b948_34958198b224.slice/crio-b23b5dae7ddb77a1a149d83048e0cfec022d92d9b484987d3ab7f09e2305eec8 WatchSource:0}: Error finding container b23b5dae7ddb77a1a149d83048e0cfec022d92d9b484987d3ab7f09e2305eec8: Status 404 returned error can't find the container with id b23b5dae7ddb77a1a149d83048e0cfec022d92d9b484987d3ab7f09e2305eec8 Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.840874 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" event={"ID":"f49a4fe2-aa60-4d14-a9bb-f13d0066a542","Type":"ContainerStarted","Data":"c11f12d1825f59697a4f6dadda31bbeed2e0a28a3b18e733bc9c274834f1aff5"} Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.842416 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" event={"ID":"ed30c5ac-3449-4902-b948-34958198b224","Type":"ContainerStarted","Data":"b23b5dae7ddb77a1a149d83048e0cfec022d92d9b484987d3ab7f09e2305eec8"} Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.844358 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d87f97d7-w9v5x" event={"ID":"c26d4007-db0b-4379-8431-d6e43dec7e9f","Type":"ContainerStarted","Data":"111832ec0e0c78c364956282064b05b1c4c2ce296de61e9f1fa4fa6702a3f91d"} Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.844386 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d87f97d7-w9v5x" event={"ID":"c26d4007-db0b-4379-8431-d6e43dec7e9f","Type":"ContainerStarted","Data":"b03473270e278212cfa587bd83c780a50bc52c823bf36377b8f5efc441c8224f"} Feb 02 10:46:17 crc kubenswrapper[4845]: I0202 10:46:17.863421 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58d87f97d7-w9v5x" podStartSLOduration=1.863404506 podStartE2EDuration="1.863404506s" podCreationTimestamp="2026-02-02 10:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:46:17.86178748 +0000 UTC m=+858.953188930" watchObservedRunningTime="2026-02-02 10:46:17.863404506 +0000 UTC m=+858.954805976" Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.890868 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ks7bq" event={"ID":"8c3ff69a-c422-491b-a933-0522f29d7e7c","Type":"ContainerStarted","Data":"8bd793cd3386f047b04124c1ccf954ba7cd2b919a283ffdd15eb8d80fea4832c"} Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.891467 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.895249 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" event={"ID":"f49a4fe2-aa60-4d14-a9bb-f13d0066a542","Type":"ContainerStarted","Data":"85113e54fd98a3708a9495d9caed953134b5b79bcc2ad0e13f44fc9ac1034690"} Feb 02 10:46:20 crc 
kubenswrapper[4845]: I0202 10:46:20.897249 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" event={"ID":"ed30c5ac-3449-4902-b948-34958198b224","Type":"ContainerStarted","Data":"9d6f684a704674dd343455675354d91ac207516624064051e805d36c4322c9bd"} Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.898146 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.899840 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" event={"ID":"65b8d7a7-4de6-4edc-b652-999572c3494a","Type":"ContainerStarted","Data":"e363f73f23b63099effb3423c08e9d4d424b14b7b8aef0396bc2fe795323940b"} Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.917591 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ks7bq" podStartSLOduration=2.476387979 podStartE2EDuration="5.917568576s" podCreationTimestamp="2026-02-02 10:46:15 +0000 UTC" firstStartedPulling="2026-02-02 10:46:16.342376106 +0000 UTC m=+857.433777556" lastFinishedPulling="2026-02-02 10:46:19.783556703 +0000 UTC m=+860.874958153" observedRunningTime="2026-02-02 10:46:20.910230984 +0000 UTC m=+862.001632444" watchObservedRunningTime="2026-02-02 10:46:20.917568576 +0000 UTC m=+862.008970026" Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.944239 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" podStartSLOduration=3.31951046 podStartE2EDuration="5.944204535s" podCreationTimestamp="2026-02-02 10:46:15 +0000 UTC" firstStartedPulling="2026-02-02 10:46:17.290085158 +0000 UTC m=+858.381486608" lastFinishedPulling="2026-02-02 10:46:19.914779223 +0000 UTC m=+861.006180683" observedRunningTime="2026-02-02 10:46:20.931381975 +0000 UTC m=+862.022783465" 
watchObservedRunningTime="2026-02-02 10:46:20.944204535 +0000 UTC m=+862.035606025" Feb 02 10:46:20 crc kubenswrapper[4845]: I0202 10:46:20.955003 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2phr4" podStartSLOduration=1.885785652 podStartE2EDuration="4.954977116s" podCreationTimestamp="2026-02-02 10:46:16 +0000 UTC" firstStartedPulling="2026-02-02 10:46:16.833630594 +0000 UTC m=+857.925032044" lastFinishedPulling="2026-02-02 10:46:19.902822058 +0000 UTC m=+860.994223508" observedRunningTime="2026-02-02 10:46:20.948221961 +0000 UTC m=+862.039623411" watchObservedRunningTime="2026-02-02 10:46:20.954977116 +0000 UTC m=+862.046378576" Feb 02 10:46:23 crc kubenswrapper[4845]: I0202 10:46:23.934553 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" event={"ID":"65b8d7a7-4de6-4edc-b652-999572c3494a","Type":"ContainerStarted","Data":"950542cdbf87ca57f2e0292c5ef95b0a5cd16010d2698dda2673389b91c50ca7"} Feb 02 10:46:23 crc kubenswrapper[4845]: I0202 10:46:23.967033 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksh5c" podStartSLOduration=2.243106171 podStartE2EDuration="8.967002999s" podCreationTimestamp="2026-02-02 10:46:15 +0000 UTC" firstStartedPulling="2026-02-02 10:46:16.731760402 +0000 UTC m=+857.823161852" lastFinishedPulling="2026-02-02 10:46:23.45565723 +0000 UTC m=+864.547058680" observedRunningTime="2026-02-02 10:46:23.9621884 +0000 UTC m=+865.053589860" watchObservedRunningTime="2026-02-02 10:46:23.967002999 +0000 UTC m=+865.058404459" Feb 02 10:46:24 crc kubenswrapper[4845]: I0202 10:46:24.090657 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:24 crc kubenswrapper[4845]: I0202 10:46:24.156573 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:24 crc kubenswrapper[4845]: I0202 10:46:24.331471 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:25 crc kubenswrapper[4845]: I0202 10:46:25.949780 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k47lx" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="registry-server" containerID="cri-o://addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c" gracePeriod=2 Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.277805 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ks7bq" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.401629 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.497278 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxhbf\" (UniqueName: \"kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf\") pod \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.497677 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities\") pod \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.497704 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content\") pod 
\"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\" (UID: \"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b\") " Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.498505 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities" (OuterVolumeSpecName: "utilities") pod "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" (UID: "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.502771 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf" (OuterVolumeSpecName: "kube-api-access-jxhbf") pod "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" (UID: "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b"). InnerVolumeSpecName "kube-api-access-jxhbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.599461 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxhbf\" (UniqueName: \"kubernetes.io/projected/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-kube-api-access-jxhbf\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.599491 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.614563 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" (UID: "a4a4a654-e5c6-4af9-905f-9d4e5b9d032b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.636021 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.636593 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.641354 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.700952 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.961023 4845 generic.go:334] "Generic (PLEG): container finished" podID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerID="addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c" exitCode=0 Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.961960 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerDied","Data":"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c"} Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.962002 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k47lx" event={"ID":"a4a4a654-e5c6-4af9-905f-9d4e5b9d032b","Type":"ContainerDied","Data":"f64227e8467858e78a947aa30dbb8e5c523b4bf1e03d44b55588307424690239"} Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.962033 4845 scope.go:117] "RemoveContainer" containerID="addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 
10:46:26.962471 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k47lx" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.965595 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:46:26 crc kubenswrapper[4845]: I0202 10:46:26.979537 4845 scope.go:117] "RemoveContainer" containerID="608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.008168 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.015534 4845 scope.go:117] "RemoveContainer" containerID="376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.015707 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k47lx"] Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.024821 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.053212 4845 scope.go:117] "RemoveContainer" containerID="addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c" Feb 02 10:46:27 crc kubenswrapper[4845]: E0202 10:46:27.058226 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c\": container with ID starting with addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c not found: ID does not exist" containerID="addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.058267 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c"} err="failed to get container status \"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c\": rpc error: code = NotFound desc = could not find container \"addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c\": container with ID starting with addd9f19be4b234210edb94ea30984f742598880804586a6dc7f1aa2a0e91c0c not found: ID does not exist" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.058290 4845 scope.go:117] "RemoveContainer" containerID="608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616" Feb 02 10:46:27 crc kubenswrapper[4845]: E0202 10:46:27.074073 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616\": container with ID starting with 608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616 not found: ID does not exist" containerID="608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.074135 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616"} err="failed to get container status \"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616\": rpc error: code = NotFound desc = could not find container \"608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616\": container with ID starting with 608bd3ebd7985eae016e771757b5b2d25d25b990f65c352487ec67efd2034616 not found: ID does not exist" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.074168 4845 scope.go:117] "RemoveContainer" containerID="376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f" Feb 02 10:46:27 crc kubenswrapper[4845]: E0202 10:46:27.080958 4845 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f\": container with ID starting with 376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f not found: ID does not exist" containerID="376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.081003 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f"} err="failed to get container status \"376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f\": rpc error: code = NotFound desc = could not find container \"376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f\": container with ID starting with 376394c283294d1180a4be60d934ca55a8664ab5d347e3d5ab14e268ca8f696f not found: ID does not exist" Feb 02 10:46:27 crc kubenswrapper[4845]: I0202 10:46:27.720784 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" path="/var/lib/kubelet/pods/a4a4a654-e5c6-4af9-905f-9d4e5b9d032b/volumes" Feb 02 10:46:36 crc kubenswrapper[4845]: I0202 10:46:36.879807 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k2dv5" Feb 02 10:46:46 crc kubenswrapper[4845]: I0202 10:46:46.239034 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:46:46 crc kubenswrapper[4845]: I0202 10:46:46.239679 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.078936 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-dd6cc54dd-nz852" podUID="760b8b36-f06d-49ac-9de5-72b222f509d0" containerName="console" containerID="cri-o://57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d" gracePeriod=15 Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.490637 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dd6cc54dd-nz852_760b8b36-f06d-49ac-9de5-72b222f509d0/console/0.log" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.491092 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.563287 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9qjk\" (UniqueName: \"kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.563352 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.563374 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: 
\"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.563427 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564264 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564281 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564325 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564320 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config" (OuterVolumeSpecName: "console-config") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564346 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.564463 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config\") pod \"760b8b36-f06d-49ac-9de5-72b222f509d0\" (UID: \"760b8b36-f06d-49ac-9de5-72b222f509d0\") " Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.565116 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca" (OuterVolumeSpecName: "service-ca") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.565269 4845 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.565287 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.565296 4845 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.565305 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/760b8b36-f06d-49ac-9de5-72b222f509d0-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.569440 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.572553 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk" (OuterVolumeSpecName: "kube-api-access-f9qjk") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "kube-api-access-f9qjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.579185 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "760b8b36-f06d-49ac-9de5-72b222f509d0" (UID: "760b8b36-f06d-49ac-9de5-72b222f509d0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.667234 4845 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.667278 4845 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/760b8b36-f06d-49ac-9de5-72b222f509d0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:52 crc kubenswrapper[4845]: I0202 10:46:52.667291 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9qjk\" (UniqueName: \"kubernetes.io/projected/760b8b36-f06d-49ac-9de5-72b222f509d0-kube-api-access-f9qjk\") on node \"crc\" DevicePath \"\"" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.159027 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dd6cc54dd-nz852_760b8b36-f06d-49ac-9de5-72b222f509d0/console/0.log" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.160189 4845 generic.go:334] "Generic (PLEG): container finished" podID="760b8b36-f06d-49ac-9de5-72b222f509d0" containerID="57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d" exitCode=2 Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.160272 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd6cc54dd-nz852" 
event={"ID":"760b8b36-f06d-49ac-9de5-72b222f509d0","Type":"ContainerDied","Data":"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d"} Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.160304 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dd6cc54dd-nz852" event={"ID":"760b8b36-f06d-49ac-9de5-72b222f509d0","Type":"ContainerDied","Data":"3bdbcedf982f353d1f33f9e0800794674194143b9b56bad7bb0604d71624ce6e"} Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.160321 4845 scope.go:117] "RemoveContainer" containerID="57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.160349 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dd6cc54dd-nz852" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.190974 4845 scope.go:117] "RemoveContainer" containerID="57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d" Feb 02 10:46:53 crc kubenswrapper[4845]: E0202 10:46:53.191383 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d\": container with ID starting with 57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d not found: ID does not exist" containerID="57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.191415 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d"} err="failed to get container status \"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d\": rpc error: code = NotFound desc = could not find container \"57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d\": container with ID starting with 
57cc98cec108b0b86f1d4c2be0f04ee12886cfce29725cbcd66fe5fd1120952d not found: ID does not exist" Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.192624 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.199618 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-dd6cc54dd-nz852"] Feb 02 10:46:53 crc kubenswrapper[4845]: I0202 10:46:53.722663 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760b8b36-f06d-49ac-9de5-72b222f509d0" path="/var/lib/kubelet/pods/760b8b36-f06d-49ac-9de5-72b222f509d0/volumes" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.244610 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr"] Feb 02 10:46:54 crc kubenswrapper[4845]: E0202 10:46:54.244960 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="extract-utilities" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.244976 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="extract-utilities" Feb 02 10:46:54 crc kubenswrapper[4845]: E0202 10:46:54.244987 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760b8b36-f06d-49ac-9de5-72b222f509d0" containerName="console" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.244992 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="760b8b36-f06d-49ac-9de5-72b222f509d0" containerName="console" Feb 02 10:46:54 crc kubenswrapper[4845]: E0202 10:46:54.245017 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="registry-server" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.245025 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="registry-server" Feb 02 10:46:54 crc kubenswrapper[4845]: E0202 10:46:54.245039 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="extract-content" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.245045 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="extract-content" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.245170 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="760b8b36-f06d-49ac-9de5-72b222f509d0" containerName="console" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.245182 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a4a654-e5c6-4af9-905f-9d4e5b9d032b" containerName="registry-server" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.246259 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.248778 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.256199 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr"] Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.289826 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc 
kubenswrapper[4845]: I0202 10:46:54.289902 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csnr\" (UniqueName: \"kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.290148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.391482 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.391658 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.391744 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7csnr\" (UniqueName: \"kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.392427 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.392590 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.409325 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csnr\" (UniqueName: \"kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:54 crc kubenswrapper[4845]: I0202 10:46:54.599833 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:46:55 crc kubenswrapper[4845]: I0202 10:46:55.041459 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr"] Feb 02 10:46:55 crc kubenswrapper[4845]: I0202 10:46:55.175562 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" event={"ID":"2d07b157-761c-4649-ace7-6b9e73636713","Type":"ContainerStarted","Data":"09c9cd81aed13fcd5104531d64e7e1a40b67eeb136b2d66bb6f9c7e1b59f5c3e"} Feb 02 10:46:55 crc kubenswrapper[4845]: E0202 10:46:55.343385 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d07b157_761c_4649_ace7_6b9e73636713.slice/crio-conmon-b90648732bdfc02c497925d9fdbbc6781a0df319de0ab6066f55a1ef80c84dcb.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:46:56 crc kubenswrapper[4845]: I0202 10:46:56.187117 4845 generic.go:334] "Generic (PLEG): container finished" podID="2d07b157-761c-4649-ace7-6b9e73636713" containerID="b90648732bdfc02c497925d9fdbbc6781a0df319de0ab6066f55a1ef80c84dcb" exitCode=0 Feb 02 10:46:56 crc kubenswrapper[4845]: I0202 10:46:56.187162 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" event={"ID":"2d07b157-761c-4649-ace7-6b9e73636713","Type":"ContainerDied","Data":"b90648732bdfc02c497925d9fdbbc6781a0df319de0ab6066f55a1ef80c84dcb"} Feb 02 10:46:56 crc kubenswrapper[4845]: I0202 10:46:56.189201 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:46:58 crc kubenswrapper[4845]: I0202 10:46:58.201727 4845 generic.go:334] "Generic (PLEG): container 
finished" podID="2d07b157-761c-4649-ace7-6b9e73636713" containerID="2b1d98ea217d455458a46c7c0d2afb20f91215b0d3f661d57ccfe2e6f26e5b60" exitCode=0 Feb 02 10:46:58 crc kubenswrapper[4845]: I0202 10:46:58.201819 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" event={"ID":"2d07b157-761c-4649-ace7-6b9e73636713","Type":"ContainerDied","Data":"2b1d98ea217d455458a46c7c0d2afb20f91215b0d3f661d57ccfe2e6f26e5b60"} Feb 02 10:46:59 crc kubenswrapper[4845]: I0202 10:46:59.212712 4845 generic.go:334] "Generic (PLEG): container finished" podID="2d07b157-761c-4649-ace7-6b9e73636713" containerID="b198f234f40d1c00f2b84d737f03fa568d6dd9dd444a43d7dbae3d8ce0b39534" exitCode=0 Feb 02 10:46:59 crc kubenswrapper[4845]: I0202 10:46:59.212756 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" event={"ID":"2d07b157-761c-4649-ace7-6b9e73636713","Type":"ContainerDied","Data":"b198f234f40d1c00f2b84d737f03fa568d6dd9dd444a43d7dbae3d8ce0b39534"} Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.572611 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.599021 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util\") pod \"2d07b157-761c-4649-ace7-6b9e73636713\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.599252 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle\") pod \"2d07b157-761c-4649-ace7-6b9e73636713\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.599349 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7csnr\" (UniqueName: \"kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr\") pod \"2d07b157-761c-4649-ace7-6b9e73636713\" (UID: \"2d07b157-761c-4649-ace7-6b9e73636713\") " Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.602298 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle" (OuterVolumeSpecName: "bundle") pod "2d07b157-761c-4649-ace7-6b9e73636713" (UID: "2d07b157-761c-4649-ace7-6b9e73636713"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.613211 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr" (OuterVolumeSpecName: "kube-api-access-7csnr") pod "2d07b157-761c-4649-ace7-6b9e73636713" (UID: "2d07b157-761c-4649-ace7-6b9e73636713"). InnerVolumeSpecName "kube-api-access-7csnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.703253 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7csnr\" (UniqueName: \"kubernetes.io/projected/2d07b157-761c-4649-ace7-6b9e73636713-kube-api-access-7csnr\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.703298 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:00 crc kubenswrapper[4845]: I0202 10:47:00.930968 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util" (OuterVolumeSpecName: "util") pod "2d07b157-761c-4649-ace7-6b9e73636713" (UID: "2d07b157-761c-4649-ace7-6b9e73636713"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:01 crc kubenswrapper[4845]: I0202 10:47:01.007129 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d07b157-761c-4649-ace7-6b9e73636713-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:01 crc kubenswrapper[4845]: I0202 10:47:01.229148 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" event={"ID":"2d07b157-761c-4649-ace7-6b9e73636713","Type":"ContainerDied","Data":"09c9cd81aed13fcd5104531d64e7e1a40b67eeb136b2d66bb6f9c7e1b59f5c3e"} Feb 02 10:47:01 crc kubenswrapper[4845]: I0202 10:47:01.229195 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c9cd81aed13fcd5104531d64e7e1a40b67eeb136b2d66bb6f9c7e1b59f5c3e" Feb 02 10:47:01 crc kubenswrapper[4845]: I0202 10:47:01.229195 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.821780 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:03 crc kubenswrapper[4845]: E0202 10:47:03.822372 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="pull" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.822386 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="pull" Feb 02 10:47:03 crc kubenswrapper[4845]: E0202 10:47:03.822405 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="util" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.822413 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="util" Feb 02 10:47:03 crc kubenswrapper[4845]: E0202 10:47:03.822423 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="extract" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.822431 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="extract" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.822586 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d07b157-761c-4649-ace7-6b9e73636713" containerName="extract" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.825143 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.846409 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.948614 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhm8\" (UniqueName: \"kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.948724 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:03 crc kubenswrapper[4845]: I0202 10:47:03.948755 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.050000 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhm8\" (UniqueName: \"kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.050394 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.050423 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.050849 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.050978 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.073497 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhm8\" (UniqueName: \"kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8\") pod \"community-operators-n9cnc\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.142627 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.224946 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.226517 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.232483 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.252996 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.253066 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.253127 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqjf\" (UniqueName: \"kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.359280 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7qqjf\" (UniqueName: \"kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.359653 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.359794 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.360486 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.361693 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.408456 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qqjf\" (UniqueName: 
\"kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf\") pod \"certified-operators-fg84s\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.576599 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:04 crc kubenswrapper[4845]: I0202 10:47:04.697368 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:04 crc kubenswrapper[4845]: W0202 10:47:04.707634 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61fbccca_6f3d_48c6_b052_63b8f73bb8fd.slice/crio-1da0eaa65c059fb15be33cf9aefc7849e6710f13d12947ce4734cbfccbc41cd1 WatchSource:0}: Error finding container 1da0eaa65c059fb15be33cf9aefc7849e6710f13d12947ce4734cbfccbc41cd1: Status 404 returned error can't find the container with id 1da0eaa65c059fb15be33cf9aefc7849e6710f13d12947ce4734cbfccbc41cd1 Feb 02 10:47:05 crc kubenswrapper[4845]: I0202 10:47:05.121121 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:05 crc kubenswrapper[4845]: W0202 10:47:05.126305 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a6b980_b780_4ec9_a2d3_4684981d8d4e.slice/crio-3408ca9957d7cb1deec24b0d6355736505a717add3896628e6c46db5dab8bf77 WatchSource:0}: Error finding container 3408ca9957d7cb1deec24b0d6355736505a717add3896628e6c46db5dab8bf77: Status 404 returned error can't find the container with id 3408ca9957d7cb1deec24b0d6355736505a717add3896628e6c46db5dab8bf77 Feb 02 10:47:05 crc kubenswrapper[4845]: I0202 10:47:05.277183 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerStarted","Data":"3408ca9957d7cb1deec24b0d6355736505a717add3896628e6c46db5dab8bf77"} Feb 02 10:47:05 crc kubenswrapper[4845]: I0202 10:47:05.279040 4845 generic.go:334] "Generic (PLEG): container finished" podID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerID="4995da76a047f6c49948e06b6d40f72683aaa6ed41b78182d2a96f15171c01f1" exitCode=0 Feb 02 10:47:05 crc kubenswrapper[4845]: I0202 10:47:05.279094 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerDied","Data":"4995da76a047f6c49948e06b6d40f72683aaa6ed41b78182d2a96f15171c01f1"} Feb 02 10:47:05 crc kubenswrapper[4845]: I0202 10:47:05.279128 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerStarted","Data":"1da0eaa65c059fb15be33cf9aefc7849e6710f13d12947ce4734cbfccbc41cd1"} Feb 02 10:47:06 crc kubenswrapper[4845]: I0202 10:47:06.288919 4845 generic.go:334] "Generic (PLEG): container finished" podID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerID="df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63" exitCode=0 Feb 02 10:47:06 crc kubenswrapper[4845]: I0202 10:47:06.289020 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerDied","Data":"df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63"} Feb 02 10:47:06 crc kubenswrapper[4845]: I0202 10:47:06.293989 4845 generic.go:334] "Generic (PLEG): container finished" podID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerID="5e9b128387b5628d756bc68b8474bbcdad91b25a1b63ae10551ab7d080e9632b" exitCode=0 Feb 02 10:47:06 crc kubenswrapper[4845]: I0202 
10:47:06.294050 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerDied","Data":"5e9b128387b5628d756bc68b8474bbcdad91b25a1b63ae10551ab7d080e9632b"} Feb 02 10:47:07 crc kubenswrapper[4845]: I0202 10:47:07.303689 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerStarted","Data":"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7"} Feb 02 10:47:07 crc kubenswrapper[4845]: I0202 10:47:07.305910 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerStarted","Data":"328f5b00453f26dbc88c2ec506227c6487985622610eca8cadc30738653842c0"} Feb 02 10:47:07 crc kubenswrapper[4845]: I0202 10:47:07.346262 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n9cnc" podStartSLOduration=2.913642594 podStartE2EDuration="4.346241911s" podCreationTimestamp="2026-02-02 10:47:03 +0000 UTC" firstStartedPulling="2026-02-02 10:47:05.280649052 +0000 UTC m=+906.372050502" lastFinishedPulling="2026-02-02 10:47:06.713248369 +0000 UTC m=+907.804649819" observedRunningTime="2026-02-02 10:47:07.33930265 +0000 UTC m=+908.430704120" watchObservedRunningTime="2026-02-02 10:47:07.346241911 +0000 UTC m=+908.437643361" Feb 02 10:47:08 crc kubenswrapper[4845]: I0202 10:47:08.314504 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerDied","Data":"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7"} Feb 02 10:47:08 crc kubenswrapper[4845]: I0202 10:47:08.314394 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerID="b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7" exitCode=0 Feb 02 10:47:09 crc kubenswrapper[4845]: I0202 10:47:09.324295 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerStarted","Data":"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba"} Feb 02 10:47:09 crc kubenswrapper[4845]: I0202 10:47:09.345980 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fg84s" podStartSLOduration=2.923789457 podStartE2EDuration="5.345961163s" podCreationTimestamp="2026-02-02 10:47:04 +0000 UTC" firstStartedPulling="2026-02-02 10:47:06.29434227 +0000 UTC m=+907.385743720" lastFinishedPulling="2026-02-02 10:47:08.716513976 +0000 UTC m=+909.807915426" observedRunningTime="2026-02-02 10:47:09.342296538 +0000 UTC m=+910.433697988" watchObservedRunningTime="2026-02-02 10:47:09.345961163 +0000 UTC m=+910.437362613" Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.811922 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.813686 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.824484 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.971918 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.972005 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:10 crc kubenswrapper[4845]: I0202 10:47:10.972036 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqs2v\" (UniqueName: \"kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.073988 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.074032 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.074051 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqs2v\" (UniqueName: \"kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.074493 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.074616 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.096464 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqs2v\" (UniqueName: \"kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v\") pod \"redhat-marketplace-b5lf6\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.175820 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:11 crc kubenswrapper[4845]: I0202 10:47:11.523160 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.349696 4845 generic.go:334] "Generic (PLEG): container finished" podID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerID="3fd971db0c565ac1d8cf02bc0f6981f66795692b2f573fc704b7bb4ba2cf3eef" exitCode=0 Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.349905 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerDied","Data":"3fd971db0c565ac1d8cf02bc0f6981f66795692b2f573fc704b7bb4ba2cf3eef"} Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.350018 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerStarted","Data":"050f1387b1bafaa0eb6a0d18bdb89af0db564a1754a0812c0ed9d00b6642e72b"} Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.680130 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-bcff8566-gkqml"] Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.681453 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.689291 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.689511 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.689718 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.689849 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.700277 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-s8tzj" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.767121 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-bcff8566-gkqml"] Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.799195 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-webhook-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.799301 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-apiservice-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: 
\"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.799323 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfxr\" (UniqueName: \"kubernetes.io/projected/71926ac8-4fc3-41de-8295-01c8ddbb9d27-kube-api-access-pgfxr\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.900706 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-apiservice-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.900741 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfxr\" (UniqueName: \"kubernetes.io/projected/71926ac8-4fc3-41de-8295-01c8ddbb9d27-kube-api-access-pgfxr\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.900819 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-webhook-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.906346 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-webhook-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.906872 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71926ac8-4fc3-41de-8295-01c8ddbb9d27-apiservice-cert\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.919567 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfxr\" (UniqueName: \"kubernetes.io/projected/71926ac8-4fc3-41de-8295-01c8ddbb9d27-kube-api-access-pgfxr\") pod \"metallb-operator-controller-manager-bcff8566-gkqml\" (UID: \"71926ac8-4fc3-41de-8295-01c8ddbb9d27\") " pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.951583 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn"] Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.953870 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.956869 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.957967 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.961208 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tc85c" Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.970364 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn"] Feb 02 10:47:12 crc kubenswrapper[4845]: I0202 10:47:12.999556 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.103935 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgqcf\" (UniqueName: \"kubernetes.io/projected/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-kube-api-access-jgqcf\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.104025 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-webhook-cert\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 
10:47:13.104065 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-apiservice-cert\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.206855 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgqcf\" (UniqueName: \"kubernetes.io/projected/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-kube-api-access-jgqcf\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.207001 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-webhook-cert\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.207060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-apiservice-cert\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.216135 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-apiservice-cert\") pod 
\"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.216216 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-webhook-cert\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.225584 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgqcf\" (UniqueName: \"kubernetes.io/projected/d2f82fb6-ff9c-4578-8e8c-2bc454b09927-kube-api-access-jgqcf\") pod \"metallb-operator-webhook-server-66c6bb874c-q55bn\" (UID: \"d2f82fb6-ff9c-4578-8e8c-2bc454b09927\") " pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.308350 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.380612 4845 generic.go:334] "Generic (PLEG): container finished" podID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerID="25c9408a498bdfbac908ba7a192f1dcdf10cbbe5bf9963596e569aea99ba57cb" exitCode=0 Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.380653 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerDied","Data":"25c9408a498bdfbac908ba7a192f1dcdf10cbbe5bf9963596e569aea99ba57cb"} Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.587191 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-bcff8566-gkqml"] Feb 02 10:47:13 crc kubenswrapper[4845]: W0202 10:47:13.601003 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71926ac8_4fc3_41de_8295_01c8ddbb9d27.slice/crio-2ec9f47aecbf4d2fd313bd04c10fcc7b70a3bc6ba5a5d33ac1f39ec856ef3bf5 WatchSource:0}: Error finding container 2ec9f47aecbf4d2fd313bd04c10fcc7b70a3bc6ba5a5d33ac1f39ec856ef3bf5: Status 404 returned error can't find the container with id 2ec9f47aecbf4d2fd313bd04c10fcc7b70a3bc6ba5a5d33ac1f39ec856ef3bf5 Feb 02 10:47:13 crc kubenswrapper[4845]: I0202 10:47:13.861654 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn"] Feb 02 10:47:13 crc kubenswrapper[4845]: W0202 10:47:13.867730 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f82fb6_ff9c_4578_8e8c_2bc454b09927.slice/crio-cbbefb99d3c80ef89be4f60ac093af5fdce6e0a78fc0d13e2eaaff7de027a1ae WatchSource:0}: Error finding container 
cbbefb99d3c80ef89be4f60ac093af5fdce6e0a78fc0d13e2eaaff7de027a1ae: Status 404 returned error can't find the container with id cbbefb99d3c80ef89be4f60ac093af5fdce6e0a78fc0d13e2eaaff7de027a1ae Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.143465 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.143572 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.185733 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.389809 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" event={"ID":"d2f82fb6-ff9c-4578-8e8c-2bc454b09927","Type":"ContainerStarted","Data":"cbbefb99d3c80ef89be4f60ac093af5fdce6e0a78fc0d13e2eaaff7de027a1ae"} Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.390945 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" event={"ID":"71926ac8-4fc3-41de-8295-01c8ddbb9d27","Type":"ContainerStarted","Data":"2ec9f47aecbf4d2fd313bd04c10fcc7b70a3bc6ba5a5d33ac1f39ec856ef3bf5"} Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.393166 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerStarted","Data":"5fb31abd9b4de4f2255ab9a57011ca0b111c55d34d92ac3f568d3966ce45afe3"} Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.443836 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:14 crc kubenswrapper[4845]: 
I0202 10:47:14.469003 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5lf6" podStartSLOduration=3.014943005 podStartE2EDuration="4.46898657s" podCreationTimestamp="2026-02-02 10:47:10 +0000 UTC" firstStartedPulling="2026-02-02 10:47:12.353091186 +0000 UTC m=+913.444492636" lastFinishedPulling="2026-02-02 10:47:13.807134751 +0000 UTC m=+914.898536201" observedRunningTime="2026-02-02 10:47:14.421959424 +0000 UTC m=+915.513360884" watchObservedRunningTime="2026-02-02 10:47:14.46898657 +0000 UTC m=+915.560388020" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.577547 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.577613 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:14 crc kubenswrapper[4845]: I0202 10:47:14.622340 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:15 crc kubenswrapper[4845]: I0202 10:47:15.495273 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:16 crc kubenswrapper[4845]: I0202 10:47:16.237460 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:47:16 crc kubenswrapper[4845]: I0202 10:47:16.237541 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:47:17 crc kubenswrapper[4845]: I0202 10:47:17.419250 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" event={"ID":"71926ac8-4fc3-41de-8295-01c8ddbb9d27","Type":"ContainerStarted","Data":"f949d5829fbc5e7934f72b74feac3639d58843e290be8dbeeb04e2733d0cea01"} Feb 02 10:47:17 crc kubenswrapper[4845]: I0202 10:47:17.419746 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:18 crc kubenswrapper[4845]: I0202 10:47:18.000326 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" podStartSLOduration=2.708828863 podStartE2EDuration="6.000307793s" podCreationTimestamp="2026-02-02 10:47:12 +0000 UTC" firstStartedPulling="2026-02-02 10:47:13.609119093 +0000 UTC m=+914.700520543" lastFinishedPulling="2026-02-02 10:47:16.900598023 +0000 UTC m=+917.991999473" observedRunningTime="2026-02-02 10:47:17.442383941 +0000 UTC m=+918.533785401" watchObservedRunningTime="2026-02-02 10:47:18.000307793 +0000 UTC m=+919.091709243" Feb 02 10:47:18 crc kubenswrapper[4845]: I0202 10:47:18.016432 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:18 crc kubenswrapper[4845]: I0202 10:47:18.016670 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n9cnc" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="registry-server" containerID="cri-o://328f5b00453f26dbc88c2ec506227c6487985622610eca8cadc30738653842c0" gracePeriod=2 Feb 02 10:47:18 crc kubenswrapper[4845]: I0202 10:47:18.438611 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerID="328f5b00453f26dbc88c2ec506227c6487985622610eca8cadc30738653842c0" exitCode=0 Feb 02 10:47:18 crc kubenswrapper[4845]: I0202 10:47:18.438693 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerDied","Data":"328f5b00453f26dbc88c2ec506227c6487985622610eca8cadc30738653842c0"} Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.153786 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.308224 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmhm8\" (UniqueName: \"kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8\") pod \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.308396 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities\") pod \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.308416 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content\") pod \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\" (UID: \"61fbccca-6f3d-48c6-b052-63b8f73bb8fd\") " Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.309367 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities" (OuterVolumeSpecName: "utilities") pod 
"61fbccca-6f3d-48c6-b052-63b8f73bb8fd" (UID: "61fbccca-6f3d-48c6-b052-63b8f73bb8fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.313657 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8" (OuterVolumeSpecName: "kube-api-access-mmhm8") pod "61fbccca-6f3d-48c6-b052-63b8f73bb8fd" (UID: "61fbccca-6f3d-48c6-b052-63b8f73bb8fd"). InnerVolumeSpecName "kube-api-access-mmhm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.358390 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61fbccca-6f3d-48c6-b052-63b8f73bb8fd" (UID: "61fbccca-6f3d-48c6-b052-63b8f73bb8fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.406267 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.406540 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fg84s" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="registry-server" containerID="cri-o://fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba" gracePeriod=2 Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.413083 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.413123 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.413139 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmhm8\" (UniqueName: \"kubernetes.io/projected/61fbccca-6f3d-48c6-b052-63b8f73bb8fd-kube-api-access-mmhm8\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.448843 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9cnc" event={"ID":"61fbccca-6f3d-48c6-b052-63b8f73bb8fd","Type":"ContainerDied","Data":"1da0eaa65c059fb15be33cf9aefc7849e6710f13d12947ce4734cbfccbc41cd1"} Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.449905 4845 scope.go:117] "RemoveContainer" containerID="328f5b00453f26dbc88c2ec506227c6487985622610eca8cadc30738653842c0" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.449086 4845 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9cnc" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.467864 4845 scope.go:117] "RemoveContainer" containerID="5e9b128387b5628d756bc68b8474bbcdad91b25a1b63ae10551ab7d080e9632b" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.482228 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.487243 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n9cnc"] Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.498188 4845 scope.go:117] "RemoveContainer" containerID="4995da76a047f6c49948e06b6d40f72683aaa6ed41b78182d2a96f15171c01f1" Feb 02 10:47:19 crc kubenswrapper[4845]: I0202 10:47:19.727513 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" path="/var/lib/kubelet/pods/61fbccca-6f3d-48c6-b052-63b8f73bb8fd/volumes" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.379767 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.463176 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" event={"ID":"d2f82fb6-ff9c-4578-8e8c-2bc454b09927","Type":"ContainerStarted","Data":"6042840959df0f8bfe6fec30c05a75456fc9bc882d4b4b13985926fc73ee637c"} Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.464318 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.466971 4845 generic.go:334] "Generic (PLEG): container finished" podID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerID="fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba" exitCode=0 Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.467014 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fg84s" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.467013 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerDied","Data":"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba"} Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.467135 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg84s" event={"ID":"45a6b980-b780-4ec9-a2d3-4684981d8d4e","Type":"ContainerDied","Data":"3408ca9957d7cb1deec24b0d6355736505a717add3896628e6c46db5dab8bf77"} Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.467155 4845 scope.go:117] "RemoveContainer" containerID="fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.495483 4845 scope.go:117] "RemoveContainer" 
containerID="b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.508935 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" podStartSLOduration=3.405865535 podStartE2EDuration="8.508916466s" podCreationTimestamp="2026-02-02 10:47:12 +0000 UTC" firstStartedPulling="2026-02-02 10:47:13.870817127 +0000 UTC m=+914.962218577" lastFinishedPulling="2026-02-02 10:47:18.973868048 +0000 UTC m=+920.065269508" observedRunningTime="2026-02-02 10:47:20.507279349 +0000 UTC m=+921.598680809" watchObservedRunningTime="2026-02-02 10:47:20.508916466 +0000 UTC m=+921.600317916" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.525995 4845 scope.go:117] "RemoveContainer" containerID="df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.538647 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qqjf\" (UniqueName: \"kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf\") pod \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.538724 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities\") pod \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.538859 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content\") pod \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\" (UID: \"45a6b980-b780-4ec9-a2d3-4684981d8d4e\") " Feb 02 10:47:20 crc 
kubenswrapper[4845]: I0202 10:47:20.539585 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities" (OuterVolumeSpecName: "utilities") pod "45a6b980-b780-4ec9-a2d3-4684981d8d4e" (UID: "45a6b980-b780-4ec9-a2d3-4684981d8d4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.539770 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.560039 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf" (OuterVolumeSpecName: "kube-api-access-7qqjf") pod "45a6b980-b780-4ec9-a2d3-4684981d8d4e" (UID: "45a6b980-b780-4ec9-a2d3-4684981d8d4e"). InnerVolumeSpecName "kube-api-access-7qqjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.566211 4845 scope.go:117] "RemoveContainer" containerID="fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba" Feb 02 10:47:20 crc kubenswrapper[4845]: E0202 10:47:20.567383 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba\": container with ID starting with fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba not found: ID does not exist" containerID="fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.567444 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba"} err="failed to get container status \"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba\": rpc error: code = NotFound desc = could not find container \"fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba\": container with ID starting with fbd230db849d847c74dbb0a41b7f17acfbe8042a249c671b2fd77745fa5f64ba not found: ID does not exist" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.567476 4845 scope.go:117] "RemoveContainer" containerID="b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7" Feb 02 10:47:20 crc kubenswrapper[4845]: E0202 10:47:20.567936 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7\": container with ID starting with b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7 not found: ID does not exist" containerID="b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.567961 
4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7"} err="failed to get container status \"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7\": rpc error: code = NotFound desc = could not find container \"b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7\": container with ID starting with b0ceb49b8726237f34fa694bf63e6f58856f9d6185fdbb98eeee756180f77bf7 not found: ID does not exist" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.567978 4845 scope.go:117] "RemoveContainer" containerID="df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63" Feb 02 10:47:20 crc kubenswrapper[4845]: E0202 10:47:20.568233 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63\": container with ID starting with df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63 not found: ID does not exist" containerID="df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.568263 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63"} err="failed to get container status \"df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63\": rpc error: code = NotFound desc = could not find container \"df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63\": container with ID starting with df0b246465b706b51c1fcb948f2c4342e4b8b5dcf4ac1ede2bbc2209c23ddd63 not found: ID does not exist" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.610099 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "45a6b980-b780-4ec9-a2d3-4684981d8d4e" (UID: "45a6b980-b780-4ec9-a2d3-4684981d8d4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.641502 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qqjf\" (UniqueName: \"kubernetes.io/projected/45a6b980-b780-4ec9-a2d3-4684981d8d4e-kube-api-access-7qqjf\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.641550 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a6b980-b780-4ec9-a2d3-4684981d8d4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.814062 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:20 crc kubenswrapper[4845]: I0202 10:47:20.820970 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fg84s"] Feb 02 10:47:21 crc kubenswrapper[4845]: I0202 10:47:21.176499 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:21 crc kubenswrapper[4845]: I0202 10:47:21.176556 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:21 crc kubenswrapper[4845]: I0202 10:47:21.216519 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:21 crc kubenswrapper[4845]: I0202 10:47:21.518650 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:21 crc kubenswrapper[4845]: I0202 10:47:21.720701 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" path="/var/lib/kubelet/pods/45a6b980-b780-4ec9-a2d3-4684981d8d4e/volumes" Feb 02 10:47:24 crc kubenswrapper[4845]: I0202 10:47:24.199798 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:24 crc kubenswrapper[4845]: I0202 10:47:24.200293 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b5lf6" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="registry-server" containerID="cri-o://5fb31abd9b4de4f2255ab9a57011ca0b111c55d34d92ac3f568d3966ce45afe3" gracePeriod=2 Feb 02 10:47:24 crc kubenswrapper[4845]: I0202 10:47:24.495813 4845 generic.go:334] "Generic (PLEG): container finished" podID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerID="5fb31abd9b4de4f2255ab9a57011ca0b111c55d34d92ac3f568d3966ce45afe3" exitCode=0 Feb 02 10:47:24 crc kubenswrapper[4845]: I0202 10:47:24.495854 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerDied","Data":"5fb31abd9b4de4f2255ab9a57011ca0b111c55d34d92ac3f568d3966ce45afe3"} Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.341879 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.414124 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content\") pod \"63bd6954-f4d1-44ff-9b92-074c115afffc\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.414563 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqs2v\" (UniqueName: \"kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v\") pod \"63bd6954-f4d1-44ff-9b92-074c115afffc\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.414725 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities\") pod \"63bd6954-f4d1-44ff-9b92-074c115afffc\" (UID: \"63bd6954-f4d1-44ff-9b92-074c115afffc\") " Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.415931 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities" (OuterVolumeSpecName: "utilities") pod "63bd6954-f4d1-44ff-9b92-074c115afffc" (UID: "63bd6954-f4d1-44ff-9b92-074c115afffc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.425446 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v" (OuterVolumeSpecName: "kube-api-access-fqs2v") pod "63bd6954-f4d1-44ff-9b92-074c115afffc" (UID: "63bd6954-f4d1-44ff-9b92-074c115afffc"). InnerVolumeSpecName "kube-api-access-fqs2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.449547 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63bd6954-f4d1-44ff-9b92-074c115afffc" (UID: "63bd6954-f4d1-44ff-9b92-074c115afffc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.512082 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5lf6" event={"ID":"63bd6954-f4d1-44ff-9b92-074c115afffc","Type":"ContainerDied","Data":"050f1387b1bafaa0eb6a0d18bdb89af0db564a1754a0812c0ed9d00b6642e72b"} Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.512174 4845 scope.go:117] "RemoveContainer" containerID="5fb31abd9b4de4f2255ab9a57011ca0b111c55d34d92ac3f568d3966ce45afe3" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.512581 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5lf6" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.516746 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.516777 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bd6954-f4d1-44ff-9b92-074c115afffc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.516789 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqs2v\" (UniqueName: \"kubernetes.io/projected/63bd6954-f4d1-44ff-9b92-074c115afffc-kube-api-access-fqs2v\") on node \"crc\" DevicePath \"\"" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.529999 4845 scope.go:117] "RemoveContainer" containerID="25c9408a498bdfbac908ba7a192f1dcdf10cbbe5bf9963596e569aea99ba57cb" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.550909 4845 scope.go:117] "RemoveContainer" containerID="3fd971db0c565ac1d8cf02bc0f6981f66795692b2f573fc704b7bb4ba2cf3eef" Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.562561 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.570773 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5lf6"] Feb 02 10:47:25 crc kubenswrapper[4845]: I0202 10:47:25.723170 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" path="/var/lib/kubelet/pods/63bd6954-f4d1-44ff-9b92-074c115afffc/volumes" Feb 02 10:47:33 crc kubenswrapper[4845]: I0202 10:47:33.312404 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-66c6bb874c-q55bn" Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.238170 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.238482 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.238526 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.239189 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.239235 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd" gracePeriod=600 Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.667419 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd" exitCode=0 Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.667464 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd"} Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.667838 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5"} Feb 02 10:47:46 crc kubenswrapper[4845]: I0202 10:47:46.667868 4845 scope.go:117] "RemoveContainer" containerID="faf8e85b5f2efdb91a1dcdfb7d3d9ff033956bb15922ba78cb0d90c0661d34f8" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.011822 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-bcff8566-gkqml" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.695670 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b"] Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696317 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696341 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696363 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="extract-utilities" Feb 02 10:47:53 crc 
kubenswrapper[4845]: I0202 10:47:53.696373 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="extract-utilities" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696417 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696426 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696441 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696449 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696465 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696473 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696489 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="extract-utilities" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696499 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="extract-utilities" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696521 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="extract-utilities" Feb 02 10:47:53 crc 
kubenswrapper[4845]: I0202 10:47:53.696530 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="extract-utilities" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696551 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696559 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.696585 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696594 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="extract-content" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696779 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fbccca-6f3d-48c6-b052-63b8f73bb8fd" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696799 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a6b980-b780-4ec9-a2d3-4684981d8d4e" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.696811 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bd6954-f4d1-44ff-9b92-074c115afffc" containerName="registry-server" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.697725 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.700006 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.700142 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dwpkm" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.707530 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bnlrj"] Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.717374 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.725091 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.728290 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b"] Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.728979 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.783614 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7gchc"] Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.786528 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.789496 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.789787 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5xjw2" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.789613 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.791429 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.792333 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmzm\" (UniqueName: \"kubernetes.io/projected/8e70dfea-db96-43f0-82ea-e9342326f82f-kube-api-access-8lmzm\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.792466 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.799841 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-pwcrt"] Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.801120 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.802904 4845 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.813580 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pwcrt"] Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.893717 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-conf\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.893986 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-sockets\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894087 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics-certs\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894171 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894248 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-reloader\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894340 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/572a79b7-a042-4090-afdd-924cdb0f9d3e-kube-api-access-njwnj\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894414 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae8d6393-e53b-4acc-9a90-094d95e29c03-metallb-excludel2\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894587 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894732 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chlb\" (UniqueName: \"kubernetes.io/projected/64760ce4-85d6-4e58-aa77-99c1ca4d936e-kube-api-access-6chlb\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894853 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-metrics-certs\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894905 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-cert\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894930 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvw5\" (UniqueName: \"kubernetes.io/projected/ae8d6393-e53b-4acc-9a90-094d95e29c03-kube-api-access-qbvw5\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.894957 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmzm\" (UniqueName: \"kubernetes.io/projected/8e70dfea-db96-43f0-82ea-e9342326f82f-kube-api-access-8lmzm\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.894865 4845 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.895106 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-startup\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.895310 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert podName:8e70dfea-db96-43f0-82ea-e9342326f82f nodeName:}" failed. No retries permitted until 2026-02-02 10:47:54.395176566 +0000 UTC m=+955.486578016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert") pod "frr-k8s-webhook-server-7df86c4f6c-hd78b" (UID: "8e70dfea-db96-43f0-82ea-e9342326f82f") : secret "frr-k8s-webhook-server-cert" not found Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.895455 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-metrics-certs\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.895555 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.920622 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmzm\" (UniqueName: \"kubernetes.io/projected/8e70dfea-db96-43f0-82ea-e9342326f82f-kube-api-access-8lmzm\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 
02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997081 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-cert\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997142 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvw5\" (UniqueName: \"kubernetes.io/projected/ae8d6393-e53b-4acc-9a90-094d95e29c03-kube-api-access-qbvw5\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997188 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-startup\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997222 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-metrics-certs\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997244 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997269 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-conf\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997294 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-sockets\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997311 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics-certs\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997337 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997395 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-reloader\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997442 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/572a79b7-a042-4090-afdd-924cdb0f9d3e-kube-api-access-njwnj\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc 
kubenswrapper[4845]: I0202 10:47:53.997474 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae8d6393-e53b-4acc-9a90-094d95e29c03-metallb-excludel2\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997548 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chlb\" (UniqueName: \"kubernetes.io/projected/64760ce4-85d6-4e58-aa77-99c1ca4d936e-kube-api-access-6chlb\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.997594 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-metrics-certs\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.998488 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-sockets\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.998502 4845 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 10:47:53 crc kubenswrapper[4845]: E0202 10:47:53.998576 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist podName:ae8d6393-e53b-4acc-9a90-094d95e29c03 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:47:54.498556816 +0000 UTC m=+955.589958356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist") pod "speaker-7gchc" (UID: "ae8d6393-e53b-4acc-9a90-094d95e29c03") : secret "metallb-memberlist" not found Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.998752 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-reloader\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.998792 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.998877 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-conf\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.999142 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae8d6393-e53b-4acc-9a90-094d95e29c03-metallb-excludel2\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:53 crc kubenswrapper[4845]: I0202 10:47:53.999719 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/572a79b7-a042-4090-afdd-924cdb0f9d3e-frr-startup\") pod 
\"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.001453 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-metrics-certs\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.002561 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/572a79b7-a042-4090-afdd-924cdb0f9d3e-metrics-certs\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.002614 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-metrics-certs\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.002564 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64760ce4-85d6-4e58-aa77-99c1ca4d936e-cert\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.025770 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chlb\" (UniqueName: \"kubernetes.io/projected/64760ce4-85d6-4e58-aa77-99c1ca4d936e-kube-api-access-6chlb\") pod \"controller-6968d8fdc4-pwcrt\" (UID: \"64760ce4-85d6-4e58-aa77-99c1ca4d936e\") " pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 
10:47:54.036591 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvw5\" (UniqueName: \"kubernetes.io/projected/ae8d6393-e53b-4acc-9a90-094d95e29c03-kube-api-access-qbvw5\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.048554 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/572a79b7-a042-4090-afdd-924cdb0f9d3e-kube-api-access-njwnj\") pod \"frr-k8s-bnlrj\" (UID: \"572a79b7-a042-4090-afdd-924cdb0f9d3e\") " pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.115274 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.332414 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.403083 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.407742 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e70dfea-db96-43f0-82ea-e9342326f82f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-hd78b\" (UID: \"8e70dfea-db96-43f0-82ea-e9342326f82f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.504879 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:54 crc kubenswrapper[4845]: E0202 10:47:54.505073 4845 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 10:47:54 crc kubenswrapper[4845]: E0202 10:47:54.505162 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist podName:ae8d6393-e53b-4acc-9a90-094d95e29c03 nodeName:}" failed. No retries permitted until 2026-02-02 10:47:55.505143159 +0000 UTC m=+956.596544609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist") pod "speaker-7gchc" (UID: "ae8d6393-e53b-4acc-9a90-094d95e29c03") : secret "metallb-memberlist" not found Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.557185 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pwcrt"] Feb 02 10:47:54 crc kubenswrapper[4845]: W0202 10:47:54.560641 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64760ce4_85d6_4e58_aa77_99c1ca4d936e.slice/crio-0590a2a7a5a913d456dd8b46e87c22c6723322d20ca62211d00dde8f18125e73 WatchSource:0}: Error finding container 0590a2a7a5a913d456dd8b46e87c22c6723322d20ca62211d00dde8f18125e73: Status 404 returned error can't find the container with id 0590a2a7a5a913d456dd8b46e87c22c6723322d20ca62211d00dde8f18125e73 Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.615159 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.726852 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pwcrt" event={"ID":"64760ce4-85d6-4e58-aa77-99c1ca4d936e","Type":"ContainerStarted","Data":"0f37d84c9ab71c9aff025dd9e38c6fc067a34c9092f676a7ae642d5595c9ec36"} Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.726917 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pwcrt" event={"ID":"64760ce4-85d6-4e58-aa77-99c1ca4d936e","Type":"ContainerStarted","Data":"0590a2a7a5a913d456dd8b46e87c22c6723322d20ca62211d00dde8f18125e73"} Feb 02 10:47:54 crc kubenswrapper[4845]: I0202 10:47:54.728251 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"1282d09a6fb51b25a80b5ae1aee4df725d28958f80cf933c202cf826d2971f7c"} Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.034690 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b"] Feb 02 10:47:55 crc kubenswrapper[4845]: W0202 10:47:55.039710 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e70dfea_db96_43f0_82ea_e9342326f82f.slice/crio-16e8987d7ff224e50854a360954e0f743645d7dddb9990ddd21357e90873934b WatchSource:0}: Error finding container 16e8987d7ff224e50854a360954e0f743645d7dddb9990ddd21357e90873934b: Status 404 returned error can't find the container with id 16e8987d7ff224e50854a360954e0f743645d7dddb9990ddd21357e90873934b Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.534875 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist\") pod 
\"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.543652 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae8d6393-e53b-4acc-9a90-094d95e29c03-memberlist\") pod \"speaker-7gchc\" (UID: \"ae8d6393-e53b-4acc-9a90-094d95e29c03\") " pod="metallb-system/speaker-7gchc" Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.601427 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7gchc" Feb 02 10:47:55 crc kubenswrapper[4845]: W0202 10:47:55.637942 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8d6393_e53b_4acc_9a90_094d95e29c03.slice/crio-c16768ffae7f824fee36ebde27b3a7d22804da52029443e75293fc9874eb1cfd WatchSource:0}: Error finding container c16768ffae7f824fee36ebde27b3a7d22804da52029443e75293fc9874eb1cfd: Status 404 returned error can't find the container with id c16768ffae7f824fee36ebde27b3a7d22804da52029443e75293fc9874eb1cfd Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.742780 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" event={"ID":"8e70dfea-db96-43f0-82ea-e9342326f82f","Type":"ContainerStarted","Data":"16e8987d7ff224e50854a360954e0f743645d7dddb9990ddd21357e90873934b"} Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.745069 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7gchc" event={"ID":"ae8d6393-e53b-4acc-9a90-094d95e29c03","Type":"ContainerStarted","Data":"c16768ffae7f824fee36ebde27b3a7d22804da52029443e75293fc9874eb1cfd"} Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.747635 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pwcrt" 
event={"ID":"64760ce4-85d6-4e58-aa77-99c1ca4d936e","Type":"ContainerStarted","Data":"55f8094bf0584bd760bf6934118031442f40abcc685016c69d4a59df82a7cb61"} Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.747826 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:47:55 crc kubenswrapper[4845]: I0202 10:47:55.774374 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-pwcrt" podStartSLOduration=2.774357635 podStartE2EDuration="2.774357635s" podCreationTimestamp="2026-02-02 10:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:47:55.770239456 +0000 UTC m=+956.861640906" watchObservedRunningTime="2026-02-02 10:47:55.774357635 +0000 UTC m=+956.865759085" Feb 02 10:47:56 crc kubenswrapper[4845]: I0202 10:47:56.759383 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7gchc" event={"ID":"ae8d6393-e53b-4acc-9a90-094d95e29c03","Type":"ContainerStarted","Data":"fd5c93be43b68cf9b3886bf7c14e709a87c22b7bdf2d0e54202b05948a4b3757"} Feb 02 10:47:56 crc kubenswrapper[4845]: I0202 10:47:56.759885 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7gchc" event={"ID":"ae8d6393-e53b-4acc-9a90-094d95e29c03","Type":"ContainerStarted","Data":"af65a7560f2d423923aa7523416c3f4785fb0d3601d5b9b1f49139adaad97f48"} Feb 02 10:47:56 crc kubenswrapper[4845]: I0202 10:47:56.778926 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7gchc" podStartSLOduration=3.778910942 podStartE2EDuration="3.778910942s" podCreationTimestamp="2026-02-02 10:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:47:56.775989458 +0000 UTC m=+957.867390908" 
watchObservedRunningTime="2026-02-02 10:47:56.778910942 +0000 UTC m=+957.870312392" Feb 02 10:47:57 crc kubenswrapper[4845]: I0202 10:47:57.771371 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7gchc" Feb 02 10:48:02 crc kubenswrapper[4845]: I0202 10:48:02.819321 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" event={"ID":"8e70dfea-db96-43f0-82ea-e9342326f82f","Type":"ContainerStarted","Data":"d4db9ffee14ab20682a8c10d4d24a9441bb64177eb56c2beaabbfad5d27f88ea"} Feb 02 10:48:02 crc kubenswrapper[4845]: I0202 10:48:02.821067 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:48:02 crc kubenswrapper[4845]: I0202 10:48:02.822833 4845 generic.go:334] "Generic (PLEG): container finished" podID="572a79b7-a042-4090-afdd-924cdb0f9d3e" containerID="eba3917b86c2ceaa0a457e3fac52b2ee3445d6d144761ce700a4bffb58c1c47b" exitCode=0 Feb 02 10:48:02 crc kubenswrapper[4845]: I0202 10:48:02.822898 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerDied","Data":"eba3917b86c2ceaa0a457e3fac52b2ee3445d6d144761ce700a4bffb58c1c47b"} Feb 02 10:48:02 crc kubenswrapper[4845]: I0202 10:48:02.865489 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" podStartSLOduration=2.872273567 podStartE2EDuration="9.865469082s" podCreationTimestamp="2026-02-02 10:47:53 +0000 UTC" firstStartedPulling="2026-02-02 10:47:55.042073306 +0000 UTC m=+956.133474746" lastFinishedPulling="2026-02-02 10:48:02.035268811 +0000 UTC m=+963.126670261" observedRunningTime="2026-02-02 10:48:02.859925752 +0000 UTC m=+963.951327222" watchObservedRunningTime="2026-02-02 10:48:02.865469082 +0000 UTC m=+963.956870532" Feb 02 10:48:03 crc 
kubenswrapper[4845]: I0202 10:48:03.835224 4845 generic.go:334] "Generic (PLEG): container finished" podID="572a79b7-a042-4090-afdd-924cdb0f9d3e" containerID="7cba66176bdc444500a8c03d75d83560a5ee92511b8bb153ed816bfd3931c548" exitCode=0 Feb 02 10:48:03 crc kubenswrapper[4845]: I0202 10:48:03.835339 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerDied","Data":"7cba66176bdc444500a8c03d75d83560a5ee92511b8bb153ed816bfd3931c548"} Feb 02 10:48:04 crc kubenswrapper[4845]: I0202 10:48:04.120174 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-pwcrt" Feb 02 10:48:04 crc kubenswrapper[4845]: I0202 10:48:04.850624 4845 generic.go:334] "Generic (PLEG): container finished" podID="572a79b7-a042-4090-afdd-924cdb0f9d3e" containerID="92701bbae76486ceada5b98e913d02e96c3dd644aedb1417e0da59fd7013429f" exitCode=0 Feb 02 10:48:04 crc kubenswrapper[4845]: I0202 10:48:04.850679 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerDied","Data":"92701bbae76486ceada5b98e913d02e96c3dd644aedb1417e0da59fd7013429f"} Feb 02 10:48:05 crc kubenswrapper[4845]: I0202 10:48:05.606130 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7gchc" Feb 02 10:48:05 crc kubenswrapper[4845]: I0202 10:48:05.862263 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"37edb9b479622c8c0140e2c2cdac39cb28ecbc733191c5d98f0bd229229d0a91"} Feb 02 10:48:05 crc kubenswrapper[4845]: I0202 10:48:05.862327 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" 
event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"78bede3a13821b7fbd0fb83b38e617d4066b5a02f0265386a212402f1f41b073"} Feb 02 10:48:05 crc kubenswrapper[4845]: I0202 10:48:05.862339 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"117266be18c79e94fe17ed2d20b5bba474aafcdfe50bbe1168b07d2ef44090dd"} Feb 02 10:48:05 crc kubenswrapper[4845]: I0202 10:48:05.862350 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"30b2f98c8a597d889d21630394d8e21560ba0f8b39df5caea42a1289eff00e1c"} Feb 02 10:48:06 crc kubenswrapper[4845]: I0202 10:48:06.873969 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"cded894d172d26689377a2c8eea087c1d79f0fb300715702d9c0300e0ecaf3de"} Feb 02 10:48:06 crc kubenswrapper[4845]: I0202 10:48:06.874223 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:48:06 crc kubenswrapper[4845]: I0202 10:48:06.874235 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bnlrj" event={"ID":"572a79b7-a042-4090-afdd-924cdb0f9d3e","Type":"ContainerStarted","Data":"a6f9d2b4c8188edd880f27710dd6f0a89aa41b60cc3c5c46304a6f4a03b7d86a"} Feb 02 10:48:06 crc kubenswrapper[4845]: I0202 10:48:06.899838 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bnlrj" podStartSLOduration=6.394474408 podStartE2EDuration="13.899815526s" podCreationTimestamp="2026-02-02 10:47:53 +0000 UTC" firstStartedPulling="2026-02-02 10:47:54.49027555 +0000 UTC m=+955.581677000" lastFinishedPulling="2026-02-02 10:48:01.995616668 +0000 UTC m=+963.087018118" 
observedRunningTime="2026-02-02 10:48:06.893654279 +0000 UTC m=+967.985055749" watchObservedRunningTime="2026-02-02 10:48:06.899815526 +0000 UTC m=+967.991216976" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.213734 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.215041 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.224236 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jk5k7" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.224344 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.225033 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.226130 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.287731 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcn9\" (UniqueName: \"kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9\") pod \"openstack-operator-index-5mv28\" (UID: \"25e84e3e-ad0d-4488-9be5-eba5934ff498\") " pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.389474 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gcn9\" (UniqueName: \"kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9\") pod 
\"openstack-operator-index-5mv28\" (UID: \"25e84e3e-ad0d-4488-9be5-eba5934ff498\") " pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.406540 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gcn9\" (UniqueName: \"kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9\") pod \"openstack-operator-index-5mv28\" (UID: \"25e84e3e-ad0d-4488-9be5-eba5934ff498\") " pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:08 crc kubenswrapper[4845]: I0202 10:48:08.560838 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:09 crc kubenswrapper[4845]: I0202 10:48:09.067372 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:09 crc kubenswrapper[4845]: W0202 10:48:09.077241 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25e84e3e_ad0d_4488_9be5_eba5934ff498.slice/crio-ae6e4d74845b9b32c84b30cb4f0d76b0757673736a4a0c46e58e4f13036c9758 WatchSource:0}: Error finding container ae6e4d74845b9b32c84b30cb4f0d76b0757673736a4a0c46e58e4f13036c9758: Status 404 returned error can't find the container with id ae6e4d74845b9b32c84b30cb4f0d76b0757673736a4a0c46e58e4f13036c9758 Feb 02 10:48:09 crc kubenswrapper[4845]: I0202 10:48:09.333945 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:48:09 crc kubenswrapper[4845]: I0202 10:48:09.370001 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:48:09 crc kubenswrapper[4845]: I0202 10:48:09.899010 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mv28" 
event={"ID":"25e84e3e-ad0d-4488-9be5-eba5934ff498","Type":"ContainerStarted","Data":"ae6e4d74845b9b32c84b30cb4f0d76b0757673736a4a0c46e58e4f13036c9758"} Feb 02 10:48:11 crc kubenswrapper[4845]: I0202 10:48:11.584138 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:11 crc kubenswrapper[4845]: I0202 10:48:11.924672 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mv28" event={"ID":"25e84e3e-ad0d-4488-9be5-eba5934ff498","Type":"ContainerStarted","Data":"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2"} Feb 02 10:48:11 crc kubenswrapper[4845]: I0202 10:48:11.925706 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5mv28" podUID="25e84e3e-ad0d-4488-9be5-eba5934ff498" containerName="registry-server" containerID="cri-o://027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2" gracePeriod=2 Feb 02 10:48:11 crc kubenswrapper[4845]: I0202 10:48:11.945052 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5mv28" podStartSLOduration=1.303143055 podStartE2EDuration="3.94503806s" podCreationTimestamp="2026-02-02 10:48:08 +0000 UTC" firstStartedPulling="2026-02-02 10:48:09.079119197 +0000 UTC m=+970.170520647" lastFinishedPulling="2026-02-02 10:48:11.721014202 +0000 UTC m=+972.812415652" observedRunningTime="2026-02-02 10:48:11.944084462 +0000 UTC m=+973.035485922" watchObservedRunningTime="2026-02-02 10:48:11.94503806 +0000 UTC m=+973.036439510" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.200293 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-csz6h"] Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.201753 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.210865 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-csz6h"] Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.250308 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2h6z\" (UniqueName: \"kubernetes.io/projected/153335e1-79de-4c5c-a3cd-2731d0998994-kube-api-access-q2h6z\") pod \"openstack-operator-index-csz6h\" (UID: \"153335e1-79de-4c5c-a3cd-2731d0998994\") " pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.351946 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5mv28_25e84e3e-ad0d-4488-9be5-eba5934ff498/registry-server/0.log" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.352028 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.352056 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2h6z\" (UniqueName: \"kubernetes.io/projected/153335e1-79de-4c5c-a3cd-2731d0998994-kube-api-access-q2h6z\") pod \"openstack-operator-index-csz6h\" (UID: \"153335e1-79de-4c5c-a3cd-2731d0998994\") " pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.379032 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2h6z\" (UniqueName: \"kubernetes.io/projected/153335e1-79de-4c5c-a3cd-2731d0998994-kube-api-access-q2h6z\") pod \"openstack-operator-index-csz6h\" (UID: \"153335e1-79de-4c5c-a3cd-2731d0998994\") " pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.524125 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.555565 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gcn9\" (UniqueName: \"kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9\") pod \"25e84e3e-ad0d-4488-9be5-eba5934ff498\" (UID: \"25e84e3e-ad0d-4488-9be5-eba5934ff498\") " Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.561629 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9" (OuterVolumeSpecName: "kube-api-access-4gcn9") pod "25e84e3e-ad0d-4488-9be5-eba5934ff498" (UID: "25e84e3e-ad0d-4488-9be5-eba5934ff498"). InnerVolumeSpecName "kube-api-access-4gcn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.659970 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gcn9\" (UniqueName: \"kubernetes.io/projected/25e84e3e-ad0d-4488-9be5-eba5934ff498-kube-api-access-4gcn9\") on node \"crc\" DevicePath \"\"" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.925410 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-csz6h"] Feb 02 10:48:12 crc kubenswrapper[4845]: W0202 10:48:12.929068 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod153335e1_79de_4c5c_a3cd_2731d0998994.slice/crio-20ccf78a0ecf8b2b5fb62e705291bf8c64cc0d787fae4343bf78c7297cfa0db1 WatchSource:0}: Error finding container 20ccf78a0ecf8b2b5fb62e705291bf8c64cc0d787fae4343bf78c7297cfa0db1: Status 404 returned error can't find the container with id 20ccf78a0ecf8b2b5fb62e705291bf8c64cc0d787fae4343bf78c7297cfa0db1 Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937640 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5mv28_25e84e3e-ad0d-4488-9be5-eba5934ff498/registry-server/0.log" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937694 4845 generic.go:334] "Generic (PLEG): container finished" podID="25e84e3e-ad0d-4488-9be5-eba5934ff498" containerID="027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2" exitCode=2 Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937728 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mv28" event={"ID":"25e84e3e-ad0d-4488-9be5-eba5934ff498","Type":"ContainerDied","Data":"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2"} Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937745 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5mv28" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937756 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5mv28" event={"ID":"25e84e3e-ad0d-4488-9be5-eba5934ff498","Type":"ContainerDied","Data":"ae6e4d74845b9b32c84b30cb4f0d76b0757673736a4a0c46e58e4f13036c9758"} Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.937791 4845 scope.go:117] "RemoveContainer" containerID="027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.994096 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.997218 4845 scope.go:117] "RemoveContainer" containerID="027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2" Feb 02 10:48:12 crc kubenswrapper[4845]: E0202 10:48:12.997725 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2\": container with ID starting with 027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2 not found: ID does not exist" containerID="027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2" Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.997761 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2"} err="failed to get container status \"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2\": rpc error: code = NotFound desc = could not find container \"027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2\": container with ID starting with 027096357aef1997ef6b1c2ea65438d90bf6c7e0b0474e4fee2d5d6f58b26fe2 not found: ID does not exist" 
Feb 02 10:48:12 crc kubenswrapper[4845]: I0202 10:48:12.999277 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5mv28"] Feb 02 10:48:13 crc kubenswrapper[4845]: I0202 10:48:13.723201 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e84e3e-ad0d-4488-9be5-eba5934ff498" path="/var/lib/kubelet/pods/25e84e3e-ad0d-4488-9be5-eba5934ff498/volumes" Feb 02 10:48:13 crc kubenswrapper[4845]: I0202 10:48:13.950404 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-csz6h" event={"ID":"153335e1-79de-4c5c-a3cd-2731d0998994","Type":"ContainerStarted","Data":"e51159131a4297f98cd5e817d82e1b3c0174b4a4cf91d1ed71635b3f457fc652"} Feb 02 10:48:13 crc kubenswrapper[4845]: I0202 10:48:13.951149 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-csz6h" event={"ID":"153335e1-79de-4c5c-a3cd-2731d0998994","Type":"ContainerStarted","Data":"20ccf78a0ecf8b2b5fb62e705291bf8c64cc0d787fae4343bf78c7297cfa0db1"} Feb 02 10:48:13 crc kubenswrapper[4845]: I0202 10:48:13.978744 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-csz6h" podStartSLOduration=1.9118989960000001 podStartE2EDuration="1.978725283s" podCreationTimestamp="2026-02-02 10:48:12 +0000 UTC" firstStartedPulling="2026-02-02 10:48:12.931963868 +0000 UTC m=+974.023365328" lastFinishedPulling="2026-02-02 10:48:12.998790155 +0000 UTC m=+974.090191615" observedRunningTime="2026-02-02 10:48:13.974098839 +0000 UTC m=+975.065500299" watchObservedRunningTime="2026-02-02 10:48:13.978725283 +0000 UTC m=+975.070126743" Feb 02 10:48:14 crc kubenswrapper[4845]: I0202 10:48:14.623282 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-hd78b" Feb 02 10:48:22 crc kubenswrapper[4845]: I0202 10:48:22.524953 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:22 crc kubenswrapper[4845]: I0202 10:48:22.525490 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:22 crc kubenswrapper[4845]: I0202 10:48:22.564879 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:23 crc kubenswrapper[4845]: I0202 10:48:23.065833 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-csz6h" Feb 02 10:48:24 crc kubenswrapper[4845]: I0202 10:48:24.341162 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bnlrj" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.041872 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf"] Feb 02 10:48:29 crc kubenswrapper[4845]: E0202 10:48:29.043009 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e84e3e-ad0d-4488-9be5-eba5934ff498" containerName="registry-server" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.043029 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e84e3e-ad0d-4488-9be5-eba5934ff498" containerName="registry-server" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.043214 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e84e3e-ad0d-4488-9be5-eba5934ff498" containerName="registry-server" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.044730 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.048543 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-z6zzx" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.050316 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf"] Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.156726 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.156856 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ch42\" (UniqueName: \"kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.156982 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 
10:48:29.258939 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ch42\" (UniqueName: \"kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.259099 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.259157 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.259734 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.259775 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.293048 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ch42\" (UniqueName: \"kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42\") pod \"a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.375158 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:29 crc kubenswrapper[4845]: I0202 10:48:29.784987 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf"] Feb 02 10:48:29 crc kubenswrapper[4845]: W0202 10:48:29.799173 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08c56222_38b1_47b8_b554_cc59e503ecf0.slice/crio-ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c WatchSource:0}: Error finding container ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c: Status 404 returned error can't find the container with id ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c Feb 02 10:48:30 crc kubenswrapper[4845]: I0202 10:48:30.100195 4845 generic.go:334] "Generic (PLEG): container finished" podID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerID="a42872033bfbe1c357ca946bba1ebae75f69a4321b58131901cbbbd890732208" exitCode=0 Feb 02 
10:48:30 crc kubenswrapper[4845]: I0202 10:48:30.100239 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" event={"ID":"08c56222-38b1-47b8-b554-cc59e503ecf0","Type":"ContainerDied","Data":"a42872033bfbe1c357ca946bba1ebae75f69a4321b58131901cbbbd890732208"} Feb 02 10:48:30 crc kubenswrapper[4845]: I0202 10:48:30.100292 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" event={"ID":"08c56222-38b1-47b8-b554-cc59e503ecf0","Type":"ContainerStarted","Data":"ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c"} Feb 02 10:48:31 crc kubenswrapper[4845]: I0202 10:48:31.111106 4845 generic.go:334] "Generic (PLEG): container finished" podID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerID="6435b21e3a2e68b60ab2550011b6d5a53c97998775b96c5e8d59875d3ef7907c" exitCode=0 Feb 02 10:48:31 crc kubenswrapper[4845]: I0202 10:48:31.111281 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" event={"ID":"08c56222-38b1-47b8-b554-cc59e503ecf0","Type":"ContainerDied","Data":"6435b21e3a2e68b60ab2550011b6d5a53c97998775b96c5e8d59875d3ef7907c"} Feb 02 10:48:32 crc kubenswrapper[4845]: I0202 10:48:32.125059 4845 generic.go:334] "Generic (PLEG): container finished" podID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerID="21f8686de02b943f94e9bed25d53b1f3cad9303746af566197cea4c7dbccf224" exitCode=0 Feb 02 10:48:32 crc kubenswrapper[4845]: I0202 10:48:32.125983 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" event={"ID":"08c56222-38b1-47b8-b554-cc59e503ecf0","Type":"ContainerDied","Data":"21f8686de02b943f94e9bed25d53b1f3cad9303746af566197cea4c7dbccf224"} Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.451248 
4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.631444 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle\") pod \"08c56222-38b1-47b8-b554-cc59e503ecf0\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.631563 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ch42\" (UniqueName: \"kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42\") pod \"08c56222-38b1-47b8-b554-cc59e503ecf0\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.631811 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util\") pod \"08c56222-38b1-47b8-b554-cc59e503ecf0\" (UID: \"08c56222-38b1-47b8-b554-cc59e503ecf0\") " Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.632519 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle" (OuterVolumeSpecName: "bundle") pod "08c56222-38b1-47b8-b554-cc59e503ecf0" (UID: "08c56222-38b1-47b8-b554-cc59e503ecf0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.639069 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42" (OuterVolumeSpecName: "kube-api-access-8ch42") pod "08c56222-38b1-47b8-b554-cc59e503ecf0" (UID: "08c56222-38b1-47b8-b554-cc59e503ecf0"). 
InnerVolumeSpecName "kube-api-access-8ch42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.645853 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util" (OuterVolumeSpecName: "util") pod "08c56222-38b1-47b8-b554-cc59e503ecf0" (UID: "08c56222-38b1-47b8-b554-cc59e503ecf0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.733091 4845 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.733117 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ch42\" (UniqueName: \"kubernetes.io/projected/08c56222-38b1-47b8-b554-cc59e503ecf0-kube-api-access-8ch42\") on node \"crc\" DevicePath \"\"" Feb 02 10:48:33 crc kubenswrapper[4845]: I0202 10:48:33.733126 4845 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/08c56222-38b1-47b8-b554-cc59e503ecf0-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:48:34 crc kubenswrapper[4845]: I0202 10:48:34.145348 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" event={"ID":"08c56222-38b1-47b8-b554-cc59e503ecf0","Type":"ContainerDied","Data":"ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c"} Feb 02 10:48:34 crc kubenswrapper[4845]: I0202 10:48:34.145414 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff167cc736960a1a975e6b21bf4059352ae59016da5e278f72ff2083ca38b08c" Feb 02 10:48:34 crc kubenswrapper[4845]: I0202 10:48:34.146012 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.730409 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8"] Feb 02 10:48:41 crc kubenswrapper[4845]: E0202 10:48:41.731280 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="pull" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.731294 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="pull" Feb 02 10:48:41 crc kubenswrapper[4845]: E0202 10:48:41.731308 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="util" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.731315 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="util" Feb 02 10:48:41 crc kubenswrapper[4845]: E0202 10:48:41.731324 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="extract" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.731330 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="extract" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.731506 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c56222-38b1-47b8-b554-cc59e503ecf0" containerName="extract" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.732271 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.735948 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-96d9k" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.759699 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8"] Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.882136 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bsh\" (UniqueName: \"kubernetes.io/projected/e693a9f1-6990-407e-9d01-a23428a6f602-kube-api-access-d7bsh\") pod \"openstack-operator-controller-init-5649bd689f-k5lt8\" (UID: \"e693a9f1-6990-407e-9d01-a23428a6f602\") " pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:41 crc kubenswrapper[4845]: I0202 10:48:41.984191 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bsh\" (UniqueName: \"kubernetes.io/projected/e693a9f1-6990-407e-9d01-a23428a6f602-kube-api-access-d7bsh\") pod \"openstack-operator-controller-init-5649bd689f-k5lt8\" (UID: \"e693a9f1-6990-407e-9d01-a23428a6f602\") " pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:42 crc kubenswrapper[4845]: I0202 10:48:42.004187 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bsh\" (UniqueName: \"kubernetes.io/projected/e693a9f1-6990-407e-9d01-a23428a6f602-kube-api-access-d7bsh\") pod \"openstack-operator-controller-init-5649bd689f-k5lt8\" (UID: \"e693a9f1-6990-407e-9d01-a23428a6f602\") " pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:42 crc kubenswrapper[4845]: I0202 10:48:42.054517 4845 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:42 crc kubenswrapper[4845]: I0202 10:48:42.529301 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8"] Feb 02 10:48:43 crc kubenswrapper[4845]: I0202 10:48:43.276684 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" event={"ID":"e693a9f1-6990-407e-9d01-a23428a6f602","Type":"ContainerStarted","Data":"fdc0e5d437c7da9a195355cf6fb416e18238f17412a4d536ae6c930273c7166d"} Feb 02 10:48:46 crc kubenswrapper[4845]: I0202 10:48:46.300981 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" event={"ID":"e693a9f1-6990-407e-9d01-a23428a6f602","Type":"ContainerStarted","Data":"eac0f8cadc1ccb0712626005360c11ed246f61bf281e6a1c7de2d446f138bb99"} Feb 02 10:48:46 crc kubenswrapper[4845]: I0202 10:48:46.301444 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:48:46 crc kubenswrapper[4845]: I0202 10:48:46.337053 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" podStartSLOduration=1.814675334 podStartE2EDuration="5.337035669s" podCreationTimestamp="2026-02-02 10:48:41 +0000 UTC" firstStartedPulling="2026-02-02 10:48:42.551458896 +0000 UTC m=+1003.642860346" lastFinishedPulling="2026-02-02 10:48:46.073819231 +0000 UTC m=+1007.165220681" observedRunningTime="2026-02-02 10:48:46.330638624 +0000 UTC m=+1007.422040074" watchObservedRunningTime="2026-02-02 10:48:46.337035669 +0000 UTC m=+1007.428437119" Feb 02 10:48:52 crc kubenswrapper[4845]: I0202 10:48:52.058101 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-5649bd689f-k5lt8" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.722660 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.725384 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.725482 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.726466 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.728283 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.729448 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.731519 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wlznx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.732250 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-49jxh" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.732601 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tg4bb" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.740506 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.748729 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.758834 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.760471 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.765301 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2tgn8" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.804293 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.811651 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.840878 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvvn\" (UniqueName: \"kubernetes.io/projected/efa2be30-a7d0-4b26-865a-58448de203a0-kube-api-access-wvvvn\") pod \"designate-operator-controller-manager-6d9697b7f4-qfprx\" (UID: \"efa2be30-a7d0-4b26-865a-58448de203a0\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.841485 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlw2\" (UniqueName: \"kubernetes.io/projected/85439e8a-f7d3-4e0b-827c-bf27e8cd53dd-kube-api-access-gmlw2\") pod \"glance-operator-controller-manager-8886f4c47-9c2wv\" (UID: \"85439e8a-f7d3-4e0b-827c-bf27e8cd53dd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.841815 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zt4\" (UniqueName: \"kubernetes.io/projected/d9196fe1-4a04-44c1-9a5f-1ad5de52da7f-kube-api-access-76zt4\") pod 
\"cinder-operator-controller-manager-8d874c8fc-c4jdf\" (UID: \"d9196fe1-4a04-44c1-9a5f-1ad5de52da7f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.841917 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvh4s\" (UniqueName: \"kubernetes.io/projected/202de28c-c44a-43d9-98fd-4b34b1dcc65f-kube-api-access-mvh4s\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-cfvq7\" (UID: \"202de28c-c44a-43d9-98fd-4b34b1dcc65f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.847727 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.848760 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.853229 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qr5xp" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.859916 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.861164 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.864166 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vtcvs" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.865833 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.917270 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.941934 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.943766 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.944585 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qvb\" (UniqueName: \"kubernetes.io/projected/745626d8-548b-43bb-aee8-eeab34a86427-kube-api-access-t8qvb\") pod \"heat-operator-controller-manager-69d6db494d-pdfcx\" (UID: \"745626d8-548b-43bb-aee8-eeab34a86427\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.944647 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvvn\" (UniqueName: \"kubernetes.io/projected/efa2be30-a7d0-4b26-865a-58448de203a0-kube-api-access-wvvvn\") pod \"designate-operator-controller-manager-6d9697b7f4-qfprx\" (UID: \"efa2be30-a7d0-4b26-865a-58448de203a0\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.944673 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7k9\" (UniqueName: \"kubernetes.io/projected/1b72ed0e-9df5-459f-8ca9-de19874a3018-kube-api-access-6x7k9\") pod \"horizon-operator-controller-manager-5fb775575f-msjcj\" (UID: \"1b72ed0e-9df5-459f-8ca9-de19874a3018\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.944698 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmlw2\" (UniqueName: \"kubernetes.io/projected/85439e8a-f7d3-4e0b-827c-bf27e8cd53dd-kube-api-access-gmlw2\") pod \"glance-operator-controller-manager-8886f4c47-9c2wv\" (UID: \"85439e8a-f7d3-4e0b-827c-bf27e8cd53dd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:49:29 crc 
kubenswrapper[4845]: I0202 10:49:29.944725 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zt4\" (UniqueName: \"kubernetes.io/projected/d9196fe1-4a04-44c1-9a5f-1ad5de52da7f-kube-api-access-76zt4\") pod \"cinder-operator-controller-manager-8d874c8fc-c4jdf\" (UID: \"d9196fe1-4a04-44c1-9a5f-1ad5de52da7f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.944745 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvh4s\" (UniqueName: \"kubernetes.io/projected/202de28c-c44a-43d9-98fd-4b34b1dcc65f-kube-api-access-mvh4s\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-cfvq7\" (UID: \"202de28c-c44a-43d9-98fd-4b34b1dcc65f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.954954 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.956506 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.956752 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.956931 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dzl6d" Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.979364 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw"] Feb 02 10:49:29 crc kubenswrapper[4845]: I0202 10:49:29.979617 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mvt2n" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.029045 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvvn\" (UniqueName: \"kubernetes.io/projected/efa2be30-a7d0-4b26-865a-58448de203a0-kube-api-access-wvvvn\") pod \"designate-operator-controller-manager-6d9697b7f4-qfprx\" (UID: \"efa2be30-a7d0-4b26-865a-58448de203a0\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.029666 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zt4\" (UniqueName: \"kubernetes.io/projected/d9196fe1-4a04-44c1-9a5f-1ad5de52da7f-kube-api-access-76zt4\") pod \"cinder-operator-controller-manager-8d874c8fc-c4jdf\" (UID: \"d9196fe1-4a04-44c1-9a5f-1ad5de52da7f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.030259 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvh4s\" (UniqueName: 
\"kubernetes.io/projected/202de28c-c44a-43d9-98fd-4b34b1dcc65f-kube-api-access-mvh4s\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-cfvq7\" (UID: \"202de28c-c44a-43d9-98fd-4b34b1dcc65f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.031648 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmlw2\" (UniqueName: \"kubernetes.io/projected/85439e8a-f7d3-4e0b-827c-bf27e8cd53dd-kube-api-access-gmlw2\") pod \"glance-operator-controller-manager-8886f4c47-9c2wv\" (UID: \"85439e8a-f7d3-4e0b-827c-bf27e8cd53dd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.072099 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.072836 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7k9\" (UniqueName: \"kubernetes.io/projected/1b72ed0e-9df5-459f-8ca9-de19874a3018-kube-api-access-6x7k9\") pod \"horizon-operator-controller-manager-5fb775575f-msjcj\" (UID: \"1b72ed0e-9df5-459f-8ca9-de19874a3018\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.073097 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qvb\" (UniqueName: \"kubernetes.io/projected/745626d8-548b-43bb-aee8-eeab34a86427-kube-api-access-t8qvb\") pod \"heat-operator-controller-manager-69d6db494d-pdfcx\" (UID: \"745626d8-548b-43bb-aee8-eeab34a86427\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.073195 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.073394 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.122263 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.125935 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.129541 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.130190 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.131159 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-njkgq" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.131448 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qvb\" (UniqueName: \"kubernetes.io/projected/745626d8-548b-43bb-aee8-eeab34a86427-kube-api-access-t8qvb\") pod \"heat-operator-controller-manager-69d6db494d-pdfcx\" (UID: \"745626d8-548b-43bb-aee8-eeab34a86427\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.139260 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x7k9\" (UniqueName: \"kubernetes.io/projected/1b72ed0e-9df5-459f-8ca9-de19874a3018-kube-api-access-6x7k9\") pod \"horizon-operator-controller-manager-5fb775575f-msjcj\" (UID: \"1b72ed0e-9df5-459f-8ca9-de19874a3018\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.171746 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.172912 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.176747 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nqqb6" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.176866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8tn\" (UniqueName: \"kubernetes.io/projected/36101b5e-a4ec-42b8-bb19-1cd2df2897c6-kube-api-access-2r8tn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-bc2xw\" (UID: \"36101b5e-a4ec-42b8-bb19-1cd2df2897c6\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.177024 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlwqw\" (UniqueName: \"kubernetes.io/projected/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-kube-api-access-qlwqw\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.177046 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.188371 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.202992 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.208471 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.216284 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.228485 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.229848 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.242876 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.243977 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.259015 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.259080 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.266278 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.268014 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.273761 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-f84th" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.274028 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dpfq6" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.274188 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.274271 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-f86kq" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.278692 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlwqw\" (UniqueName: \"kubernetes.io/projected/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-kube-api-access-qlwqw\") pod 
\"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.278732 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46rz\" (UniqueName: \"kubernetes.io/projected/7cc6d028-e9d2-459c-b34c-d069917832a4-kube-api-access-f46rz\") pod \"keystone-operator-controller-manager-84f48565d4-w55l4\" (UID: \"7cc6d028-e9d2-459c-b34c-d069917832a4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.278754 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.278808 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r8tn\" (UniqueName: \"kubernetes.io/projected/36101b5e-a4ec-42b8-bb19-1cd2df2897c6-kube-api-access-2r8tn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-bc2xw\" (UID: \"36101b5e-a4ec-42b8-bb19-1cd2df2897c6\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.278855 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h5ml\" (UniqueName: \"kubernetes.io/projected/de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d-kube-api-access-4h5ml\") pod \"manila-operator-controller-manager-7dd968899f-plj9z\" (UID: \"de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d\") " 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:49:30 crc kubenswrapper[4845]: E0202 10:49:30.279261 4845 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:30 crc kubenswrapper[4845]: E0202 10:49:30.279306 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:30.779290296 +0000 UTC m=+1051.870691746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.281941 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.287131 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.291462 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bfv5p" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.300514 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlwqw\" (UniqueName: \"kubernetes.io/projected/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-kube-api-access-qlwqw\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.302620 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r8tn\" (UniqueName: \"kubernetes.io/projected/36101b5e-a4ec-42b8-bb19-1cd2df2897c6-kube-api-access-2r8tn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-bc2xw\" (UID: \"36101b5e-a4ec-42b8-bb19-1cd2df2897c6\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.305480 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.313593 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.314667 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.316656 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mr87t" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.324243 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.325487 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.328819 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q7rbb" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.328985 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.337816 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.339090 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.343861 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7g9q2" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.351631 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.365728 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.380839 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26qh\" (UniqueName: \"kubernetes.io/projected/568bf546-0674-4dbd-91d8-9497c682e368-kube-api-access-v26qh\") pod \"mariadb-operator-controller-manager-67bf948998-t89t9\" (UID: \"568bf546-0674-4dbd-91d8-9497c682e368\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.380900 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h5ml\" (UniqueName: \"kubernetes.io/projected/de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d-kube-api-access-4h5ml\") pod \"manila-operator-controller-manager-7dd968899f-plj9z\" (UID: \"de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.380988 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq5x\" (UniqueName: \"kubernetes.io/projected/a1c4a4d1-3974-47c1-9efc-ee88a38e13a5-kube-api-access-sbq5x\") pod \"neutron-operator-controller-manager-585dbc889-hnt9f\" 
(UID: \"a1c4a4d1-3974-47c1-9efc-ee88a38e13a5\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.381007 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46rz\" (UniqueName: \"kubernetes.io/projected/7cc6d028-e9d2-459c-b34c-d069917832a4-kube-api-access-f46rz\") pod \"keystone-operator-controller-manager-84f48565d4-w55l4\" (UID: \"7cc6d028-e9d2-459c-b34c-d069917832a4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.381074 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtjjb\" (UniqueName: \"kubernetes.io/projected/cac06f19-af65-481d-b739-68375e8d2968-kube-api-access-jtjjb\") pod \"nova-operator-controller-manager-55bff696bd-p98cd\" (UID: \"cac06f19-af65-481d-b739-68375e8d2968\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.392965 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.398348 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.407718 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h5ml\" (UniqueName: \"kubernetes.io/projected/de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d-kube-api-access-4h5ml\") pod \"manila-operator-controller-manager-7dd968899f-plj9z\" (UID: \"de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.409260 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.410059 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4j54l" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.414050 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46rz\" (UniqueName: \"kubernetes.io/projected/7cc6d028-e9d2-459c-b34c-d069917832a4-kube-api-access-f46rz\") pod \"keystone-operator-controller-manager-84f48565d4-w55l4\" (UID: \"7cc6d028-e9d2-459c-b34c-d069917832a4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.422116 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.423215 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.426674 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z27q9" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.436709 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.446399 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.468449 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.469831 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.474869 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pmj6d" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.477519 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8rfm\" (UniqueName: \"kubernetes.io/projected/146aa38c-b63c-485a-9c55-006031cfcaa0-kube-api-access-r8rfm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482615 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482640 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wp7\" (UniqueName: \"kubernetes.io/projected/30843195-75a4-4b59-9193-dacda845ace7-kube-api-access-j7wp7\") pod \"octavia-operator-controller-manager-6687f8d877-s97rq\" (UID: \"30843195-75a4-4b59-9193-dacda845ace7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 
10:49:30.482666 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26qh\" (UniqueName: \"kubernetes.io/projected/568bf546-0674-4dbd-91d8-9497c682e368-kube-api-access-v26qh\") pod \"mariadb-operator-controller-manager-67bf948998-t89t9\" (UID: \"568bf546-0674-4dbd-91d8-9497c682e368\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482707 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk98v\" (UniqueName: \"kubernetes.io/projected/70403789-9865-4c4d-a969-118a157e564e-kube-api-access-sk98v\") pod \"placement-operator-controller-manager-5b964cf4cd-9ltr5\" (UID: \"70403789-9865-4c4d-a969-118a157e564e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482746 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qqg\" (UniqueName: \"kubernetes.io/projected/eae9c104-9193-4404-b25a-3a47932ef374-kube-api-access-47qqg\") pod \"ovn-operator-controller-manager-788c46999f-lt2wj\" (UID: \"eae9c104-9193-4404-b25a-3a47932ef374\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482803 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbq5x\" (UniqueName: \"kubernetes.io/projected/a1c4a4d1-3974-47c1-9efc-ee88a38e13a5-kube-api-access-sbq5x\") pod \"neutron-operator-controller-manager-585dbc889-hnt9f\" (UID: \"a1c4a4d1-3974-47c1-9efc-ee88a38e13a5\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.482875 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtjjb\" 
(UniqueName: \"kubernetes.io/projected/cac06f19-af65-481d-b739-68375e8d2968-kube-api-access-jtjjb\") pod \"nova-operator-controller-manager-55bff696bd-p98cd\" (UID: \"cac06f19-af65-481d-b739-68375e8d2968\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.494952 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mkrpp"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.496021 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.503574 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rqwwx" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.504477 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mkrpp"] Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.509197 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbq5x\" (UniqueName: \"kubernetes.io/projected/a1c4a4d1-3974-47c1-9efc-ee88a38e13a5-kube-api-access-sbq5x\") pod \"neutron-operator-controller-manager-585dbc889-hnt9f\" (UID: \"a1c4a4d1-3974-47c1-9efc-ee88a38e13a5\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.509687 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26qh\" (UniqueName: \"kubernetes.io/projected/568bf546-0674-4dbd-91d8-9497c682e368-kube-api-access-v26qh\") pod \"mariadb-operator-controller-manager-67bf948998-t89t9\" (UID: \"568bf546-0674-4dbd-91d8-9497c682e368\") " 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:49:30 crc kubenswrapper[4845]: I0202 10:49:30.512371 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtjjb\" (UniqueName: \"kubernetes.io/projected/cac06f19-af65-481d-b739-68375e8d2968-kube-api-access-jtjjb\") pod \"nova-operator-controller-manager-55bff696bd-p98cd\" (UID: \"cac06f19-af65-481d-b739-68375e8d2968\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.093999 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.117999 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.130211 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.130956 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.132717 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.128277 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ng7k\" (UniqueName: \"kubernetes.io/projected/09ccace8-b972-48ae-a15d-ecf88a300105-kube-api-access-9ng7k\") pod \"test-operator-controller-manager-56f8bfcd9f-5f4q4\" (UID: \"09ccace8-b972-48ae-a15d-ecf88a300105\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.157703 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.157794 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvw4x\" (UniqueName: \"kubernetes.io/projected/1cec8fc8-2b7b-4332-92a4-05483486f925-kube-api-access-jvw4x\") pod \"telemetry-operator-controller-manager-6bbb97ddc6-fx4tn\" (UID: \"1cec8fc8-2b7b-4332-92a4-05483486f925\") " pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.157877 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8rfm\" (UniqueName: \"kubernetes.io/projected/146aa38c-b63c-485a-9c55-006031cfcaa0-kube-api-access-r8rfm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:31 crc 
kubenswrapper[4845]: I0202 10:49:31.157925 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.157952 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnjj\" (UniqueName: \"kubernetes.io/projected/817413ef-6c47-47ec-8e08-8dffd27c1e11-kube-api-access-wtnjj\") pod \"swift-operator-controller-manager-68fc8c869-w7bxj\" (UID: \"817413ef-6c47-47ec-8e08-8dffd27c1e11\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.157984 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wp7\" (UniqueName: \"kubernetes.io/projected/30843195-75a4-4b59-9193-dacda845ace7-kube-api-access-j7wp7\") pod \"octavia-operator-controller-manager-6687f8d877-s97rq\" (UID: \"30843195-75a4-4b59-9193-dacda845ace7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.158065 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk98v\" (UniqueName: \"kubernetes.io/projected/70403789-9865-4c4d-a969-118a157e564e-kube-api-access-sk98v\") pod \"placement-operator-controller-manager-5b964cf4cd-9ltr5\" (UID: \"70403789-9865-4c4d-a969-118a157e564e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.158110 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47qqg\" 
(UniqueName: \"kubernetes.io/projected/eae9c104-9193-4404-b25a-3a47932ef374-kube-api-access-47qqg\") pod \"ovn-operator-controller-manager-788c46999f-lt2wj\" (UID: \"eae9c104-9193-4404-b25a-3a47932ef374\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.162975 4845 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.163040 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:32.163020646 +0000 UTC m=+1053.254422096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.164453 4845 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.164507 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert podName:146aa38c-b63c-485a-9c55-006031cfcaa0 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:31.664492477 +0000 UTC m=+1052.755893927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" (UID: "146aa38c-b63c-485a-9c55-006031cfcaa0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.167414 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.219718 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8rfm\" (UniqueName: \"kubernetes.io/projected/146aa38c-b63c-485a-9c55-006031cfcaa0-kube-api-access-r8rfm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.228514 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk98v\" (UniqueName: \"kubernetes.io/projected/70403789-9865-4c4d-a969-118a157e564e-kube-api-access-sk98v\") pod \"placement-operator-controller-manager-5b964cf4cd-9ltr5\" (UID: \"70403789-9865-4c4d-a969-118a157e564e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.236024 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47qqg\" (UniqueName: \"kubernetes.io/projected/eae9c104-9193-4404-b25a-3a47932ef374-kube-api-access-47qqg\") pod \"ovn-operator-controller-manager-788c46999f-lt2wj\" (UID: \"eae9c104-9193-4404-b25a-3a47932ef374\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.249685 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wp7\" (UniqueName: \"kubernetes.io/projected/30843195-75a4-4b59-9193-dacda845ace7-kube-api-access-j7wp7\") pod \"octavia-operator-controller-manager-6687f8d877-s97rq\" (UID: \"30843195-75a4-4b59-9193-dacda845ace7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.262539 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnjj\" (UniqueName: \"kubernetes.io/projected/817413ef-6c47-47ec-8e08-8dffd27c1e11-kube-api-access-wtnjj\") pod \"swift-operator-controller-manager-68fc8c869-w7bxj\" (UID: \"817413ef-6c47-47ec-8e08-8dffd27c1e11\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.271493 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zpd\" (UniqueName: \"kubernetes.io/projected/5d4eb1a9-137a-4959-9d37-d81ee9c6dd54-kube-api-access-b6zpd\") pod \"watcher-operator-controller-manager-564965969-mkrpp\" (UID: \"5d4eb1a9-137a-4959-9d37-d81ee9c6dd54\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.271816 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ng7k\" (UniqueName: \"kubernetes.io/projected/09ccace8-b972-48ae-a15d-ecf88a300105-kube-api-access-9ng7k\") pod \"test-operator-controller-manager-56f8bfcd9f-5f4q4\" (UID: \"09ccace8-b972-48ae-a15d-ecf88a300105\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.272060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvw4x\" (UniqueName: 
\"kubernetes.io/projected/1cec8fc8-2b7b-4332-92a4-05483486f925-kube-api-access-jvw4x\") pod \"telemetry-operator-controller-manager-6bbb97ddc6-fx4tn\" (UID: \"1cec8fc8-2b7b-4332-92a4-05483486f925\") " pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.313210 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnjj\" (UniqueName: \"kubernetes.io/projected/817413ef-6c47-47ec-8e08-8dffd27c1e11-kube-api-access-wtnjj\") pod \"swift-operator-controller-manager-68fc8c869-w7bxj\" (UID: \"817413ef-6c47-47ec-8e08-8dffd27c1e11\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.324838 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.324914 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvw4x\" (UniqueName: \"kubernetes.io/projected/1cec8fc8-2b7b-4332-92a4-05483486f925-kube-api-access-jvw4x\") pod \"telemetry-operator-controller-manager-6bbb97ddc6-fx4tn\" (UID: \"1cec8fc8-2b7b-4332-92a4-05483486f925\") " pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.328662 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ng7k\" (UniqueName: \"kubernetes.io/projected/09ccace8-b972-48ae-a15d-ecf88a300105-kube-api-access-9ng7k\") pod \"test-operator-controller-manager-56f8bfcd9f-5f4q4\" (UID: \"09ccace8-b972-48ae-a15d-ecf88a300105\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.338456 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.355665 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"] Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.361315 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.364416 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.372305 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.372644 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zbcwm" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.372818 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.374360 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zpd\" (UniqueName: \"kubernetes.io/projected/5d4eb1a9-137a-4959-9d37-d81ee9c6dd54-kube-api-access-b6zpd\") pod \"watcher-operator-controller-manager-564965969-mkrpp\" (UID: \"5d4eb1a9-137a-4959-9d37-d81ee9c6dd54\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.375762 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.395012 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.398134 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"] Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.410910 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zpd\" (UniqueName: \"kubernetes.io/projected/5d4eb1a9-137a-4959-9d37-d81ee9c6dd54-kube-api-access-b6zpd\") pod \"watcher-operator-controller-manager-564965969-mkrpp\" (UID: \"5d4eb1a9-137a-4959-9d37-d81ee9c6dd54\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.414856 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.425340 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.451165 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l"] Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.452272 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.457387 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j652m" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.475936 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.476109 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.476179 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvrt\" (UniqueName: \"kubernetes.io/projected/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-kube-api-access-7bvrt\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.476207 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vgz\" (UniqueName: 
\"kubernetes.io/projected/39f98254-3b87-4ac2-be8c-7d7a0f29d6ce-kube-api-access-f4vgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z5f9l\" (UID: \"39f98254-3b87-4ac2-be8c-7d7a0f29d6ce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.494964 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l"] Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.578626 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.578712 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bvrt\" (UniqueName: \"kubernetes.io/projected/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-kube-api-access-7bvrt\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.578743 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4vgz\" (UniqueName: \"kubernetes.io/projected/39f98254-3b87-4ac2-be8c-7d7a0f29d6ce-kube-api-access-f4vgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z5f9l\" (UID: \"39f98254-3b87-4ac2-be8c-7d7a0f29d6ce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.578786 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.579010 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.579058 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:32.079041919 +0000 UTC m=+1053.170443369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.579313 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.579340 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:32.079330607 +0000 UTC m=+1053.170732057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.603503 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bvrt\" (UniqueName: \"kubernetes.io/projected/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-kube-api-access-7bvrt\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.616418 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4vgz\" (UniqueName: \"kubernetes.io/projected/39f98254-3b87-4ac2-be8c-7d7a0f29d6ce-kube-api-access-f4vgz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z5f9l\" (UID: \"39f98254-3b87-4ac2-be8c-7d7a0f29d6ce\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.678995 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.680908 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.681091 4845 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: E0202 10:49:31.681152 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert podName:146aa38c-b63c-485a-9c55-006031cfcaa0 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:32.681137966 +0000 UTC m=+1053.772539416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" (UID: "146aa38c-b63c-485a-9c55-006031cfcaa0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:31 crc kubenswrapper[4845]: I0202 10:49:31.818038 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.096985 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.097328 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.097154 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.097459 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:33.097446628 +0000 UTC m=+1054.188848078 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.097415 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.097488 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:33.097483129 +0000 UTC m=+1054.188884579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.173521 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" event={"ID":"202de28c-c44a-43d9-98fd-4b34b1dcc65f","Type":"ContainerStarted","Data":"f8b4a0f004c178a2aa77dcd6926a200f42d1037c247fa28982e04afad63f5a2d"} Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.198460 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.198631 4845 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.198688 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:34.19867192 +0000 UTC m=+1055.290073370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.570208 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.580256 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.593751 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.606443 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.614129 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx"] Feb 02 10:49:32 crc kubenswrapper[4845]: W0202 10:49:32.624206 4845 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85439e8a_f7d3_4e0b_827c_bf27e8cd53dd.slice/crio-3536de7882258acaf9a1249f0cb1acfc21a30d7bc8edc1e088ab6e8c5750e4a9 WatchSource:0}: Error finding container 3536de7882258acaf9a1249f0cb1acfc21a30d7bc8edc1e088ab6e8c5750e4a9: Status 404 returned error can't find the container with id 3536de7882258acaf9a1249f0cb1acfc21a30d7bc8edc1e088ab6e8c5750e4a9 Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.707838 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.708329 4845 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: E0202 10:49:32.708469 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert podName:146aa38c-b63c-485a-9c55-006031cfcaa0 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:34.708454406 +0000 UTC m=+1055.799855856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" (UID: "146aa38c-b63c-485a-9c55-006031cfcaa0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.923308 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.933468 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.941137 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.953063 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5"] Feb 02 10:49:32 crc kubenswrapper[4845]: I0202 10:49:32.963527 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.127445 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.127594 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.127584 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.127715 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:35.127697021 +0000 UTC m=+1056.219098471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.127642 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.127768 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:35.127755973 +0000 UTC m=+1056.219157423 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.183855 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" event={"ID":"85439e8a-f7d3-4e0b-827c-bf27e8cd53dd","Type":"ContainerStarted","Data":"3536de7882258acaf9a1249f0cb1acfc21a30d7bc8edc1e088ab6e8c5750e4a9"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.185427 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" event={"ID":"7cc6d028-e9d2-459c-b34c-d069917832a4","Type":"ContainerStarted","Data":"1da94ec7fa8d97577de950d25dbbbead4b863594dcc1dc46e05d40acb827ca01"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.187182 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" event={"ID":"d9196fe1-4a04-44c1-9a5f-1ad5de52da7f","Type":"ContainerStarted","Data":"abcf9ef4634be149814be650705e77d201e4cb1dcd2c144c42fcddd6c090fe3a"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.189480 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" event={"ID":"de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d","Type":"ContainerStarted","Data":"22ad1ac3e37e1a539566f8dabdfb1b6a2e59059917dc8b649710689fa10a33d6"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.190808 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" 
event={"ID":"cac06f19-af65-481d-b739-68375e8d2968","Type":"ContainerStarted","Data":"5b3cd5202f49cc911c0b5cdffd1a124860a826fdc7cba2403efc0432d7fa7e99"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.191942 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" event={"ID":"745626d8-548b-43bb-aee8-eeab34a86427","Type":"ContainerStarted","Data":"fa43a304dfcd547829c0520a2fb4c37e3936a85dcddcf9a4d61fd7f373e18be9"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.193236 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" event={"ID":"1b72ed0e-9df5-459f-8ca9-de19874a3018","Type":"ContainerStarted","Data":"1c5b24bdeab4cb6b2e01273e2d23112229bac17f3cb9b14f8dbe859478fd1518"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.194474 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" event={"ID":"36101b5e-a4ec-42b8-bb19-1cd2df2897c6","Type":"ContainerStarted","Data":"321f6ef88440cf52334d685e13ed4823ecad5de5b967eb68b8b192fdf85640df"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.195824 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" event={"ID":"efa2be30-a7d0-4b26-865a-58448de203a0","Type":"ContainerStarted","Data":"c91391f9b840e9b6334a5ea84142ea7b0c1d1430f247cee4e42e0a286708b06c"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.196988 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" event={"ID":"70403789-9865-4c4d-a969-118a157e564e","Type":"ContainerStarted","Data":"00e24f6e7e68bf059a0fdb580752617aa2c4ea0c52f7aa11fa4a126f2201d010"} Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.519195 4845 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.537029 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.550977 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-mkrpp"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.578703 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.593434 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.604238 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9"] Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.604691 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ng7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-5f4q4_openstack-operators(09ccace8-b972-48ae-a15d-ecf88a300105): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.605039 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:b2c5dab9eea05a087b6cf44f7f86794324fc86bd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvw4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6bbb97ddc6-fx4tn_openstack-operators(1cec8fc8-2b7b-4332-92a4-05483486f925): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.605074 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6zpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-mkrpp_openstack-operators(5d4eb1a9-137a-4959-9d37-d81ee9c6dd54): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.606237 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" podUID="5d4eb1a9-137a-4959-9d37-d81ee9c6dd54" Feb 02 10:49:33 crc 
kubenswrapper[4845]: E0202 10:49:33.606299 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" podUID="09ccace8-b972-48ae-a15d-ecf88a300105" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.606328 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" podUID="1cec8fc8-2b7b-4332-92a4-05483486f925" Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.606531 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4"] Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.608483 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7wp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-s97rq_openstack-operators(30843195-75a4-4b59-9193-dacda845ace7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:49:33 crc kubenswrapper[4845]: E0202 10:49:33.609752 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" podUID="30843195-75a4-4b59-9193-dacda845ace7" Feb 02 10:49:33 crc 
kubenswrapper[4845]: I0202 10:49:33.612711 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l"] Feb 02 10:49:33 crc kubenswrapper[4845]: I0202 10:49:33.618900 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq"] Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.206471 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" event={"ID":"a1c4a4d1-3974-47c1-9efc-ee88a38e13a5","Type":"ContainerStarted","Data":"ea16e763f815e83f3b4755f02dd90815b54d0664473503229789b51bc66f4f4f"} Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.208256 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" event={"ID":"1cec8fc8-2b7b-4332-92a4-05483486f925","Type":"ContainerStarted","Data":"9adc09b42771a038d2cc7c7ec8b623e5ab5be0ad5f0e65b1430046edae6344bf"} Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.210584 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:b2c5dab9eea05a087b6cf44f7f86794324fc86bd\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" podUID="1cec8fc8-2b7b-4332-92a4-05483486f925" Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.213096 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" event={"ID":"eae9c104-9193-4404-b25a-3a47932ef374","Type":"ContainerStarted","Data":"44a42a8e7fbf3e33a7945a482082f81cae51f7bf0e53fd9f69a6a1851fcc8928"} Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.216123 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" event={"ID":"817413ef-6c47-47ec-8e08-8dffd27c1e11","Type":"ContainerStarted","Data":"ff786ddb469786a1a25136331be442467bb860318b6ff32935fca4585686974d"} Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.218862 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" event={"ID":"09ccace8-b972-48ae-a15d-ecf88a300105","Type":"ContainerStarted","Data":"413b69799994ff2d087741ca276b0aa54e63135cd033861c59502598eea46821"} Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.220696 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" podUID="09ccace8-b972-48ae-a15d-ecf88a300105" Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.227411 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" event={"ID":"568bf546-0674-4dbd-91d8-9497c682e368","Type":"ContainerStarted","Data":"2d6909ab02a4ae9cbe8ec2219c9cd4f1dcfa93de71806f5c84767b9de149ee73"} Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.230096 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" event={"ID":"30843195-75a4-4b59-9193-dacda845ace7","Type":"ContainerStarted","Data":"1575921b56f986bb3b342a7733fd3080f25db04773df72e09ef74224f17fb3b5"} Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.231464 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" podUID="30843195-75a4-4b59-9193-dacda845ace7"
Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.232281 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" event={"ID":"5d4eb1a9-137a-4959-9d37-d81ee9c6dd54","Type":"ContainerStarted","Data":"dcdb10030bfbb72a5c1aebce163ff7e15cf200a78777304ca4418deec6dfb48b"}
Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.233702 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" podUID="5d4eb1a9-137a-4959-9d37-d81ee9c6dd54"
Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.234647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" event={"ID":"39f98254-3b87-4ac2-be8c-7d7a0f29d6ce","Type":"ContainerStarted","Data":"c84db55a11f73b38ba79690b9609ed7a732d13bb75727416a3ec839932214a0c"}
Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.258509 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"
Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.259336 4845 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.259380 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:38.259365822 +0000 UTC m=+1059.350767272 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:34 crc kubenswrapper[4845]: I0202 10:49:34.765904 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"
Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.766050 4845 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 10:49:34 crc kubenswrapper[4845]: E0202 10:49:34.766384 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert podName:146aa38c-b63c-485a-9c55-006031cfcaa0 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:38.766364439 +0000 UTC m=+1059.857765879 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" (UID: "146aa38c-b63c-485a-9c55-006031cfcaa0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 10:49:35 crc kubenswrapper[4845]: I0202 10:49:35.173049 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:35 crc kubenswrapper[4845]: I0202 10:49:35.173170 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.173269 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.173331 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.173348 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:39.173329616 +0000 UTC m=+1060.264731066 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.173379 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:39.173364947 +0000 UTC m=+1060.264766397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.244854 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" podUID="30843195-75a4-4b59-9193-dacda845ace7"
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.244966 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" podUID="5d4eb1a9-137a-4959-9d37-d81ee9c6dd54"
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.245176 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" podUID="09ccace8-b972-48ae-a15d-ecf88a300105"
Feb 02 10:49:35 crc kubenswrapper[4845]: E0202 10:49:35.246927 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/telemetry-operator:b2c5dab9eea05a087b6cf44f7f86794324fc86bd\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" podUID="1cec8fc8-2b7b-4332-92a4-05483486f925"
Feb 02 10:49:38 crc kubenswrapper[4845]: I0202 10:49:38.329932 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"
Feb 02 10:49:38 crc kubenswrapper[4845]: E0202 10:49:38.330135 4845 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:38 crc kubenswrapper[4845]: E0202 10:49:38.330449 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:46.330424522 +0000 UTC m=+1067.421826042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:38 crc kubenswrapper[4845]: I0202 10:49:38.840373 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"
Feb 02 10:49:38 crc kubenswrapper[4845]: E0202 10:49:38.840540 4845 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 10:49:38 crc kubenswrapper[4845]: E0202 10:49:38.840588 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert podName:146aa38c-b63c-485a-9c55-006031cfcaa0 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:46.840573767 +0000 UTC m=+1067.931975217 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" (UID: "146aa38c-b63c-485a-9c55-006031cfcaa0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 10:49:39 crc kubenswrapper[4845]: I0202 10:49:39.249939 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:39 crc kubenswrapper[4845]: I0202 10:49:39.250140 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:39 crc kubenswrapper[4845]: E0202 10:49:39.250146 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 10:49:39 crc kubenswrapper[4845]: E0202 10:49:39.250212 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:47.25019319 +0000 UTC m=+1068.341594640 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found
Feb 02 10:49:39 crc kubenswrapper[4845]: E0202 10:49:39.250372 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 10:49:39 crc kubenswrapper[4845]: E0202 10:49:39.250469 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:49:47.250448257 +0000 UTC m=+1068.341849757 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.237506 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.239952 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.396530 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"
Feb 02 10:49:46 crc kubenswrapper[4845]: E0202 10:49:46.396718 4845 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:46 crc kubenswrapper[4845]: E0202 10:49:46.396792 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert podName:f3c02aa0-5039-4a4f-ae11-1bac119f7e31 nodeName:}" failed. No retries permitted until 2026-02-02 10:50:02.396772109 +0000 UTC m=+1083.488173569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert") pod "infra-operator-controller-manager-79955696d6-m55cc" (UID: "f3c02aa0-5039-4a4f-ae11-1bac119f7e31") : secret "infra-operator-webhook-server-cert" not found
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.904794 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.910733 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/146aa38c-b63c-485a-9c55-006031cfcaa0-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh\" (UID: \"146aa38c-b63c-485a-9c55-006031cfcaa0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"
Feb 02 10:49:46 crc kubenswrapper[4845]: I0202 10:49:46.951606 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.084789 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8"
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.085066 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6x7k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-msjcj_openstack-operators(1b72ed0e-9df5-459f-8ca9-de19874a3018): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.086305 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" podUID="1b72ed0e-9df5-459f-8ca9-de19874a3018"
Feb 02 10:49:47 crc kubenswrapper[4845]: I0202 10:49:47.312731 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.312965 4845 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.314109 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:50:03.314085948 +0000 UTC m=+1084.405487408 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "webhook-server-cert" not found
Feb 02 10:49:47 crc kubenswrapper[4845]: I0202 10:49:47.314219 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.314445 4845 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.314554 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs podName:bd7f3a0c-1bdf-4673-b657-f56e7040f2a1 nodeName:}" failed. No retries permitted until 2026-02-02 10:50:03.314532081 +0000 UTC m=+1084.405933621 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs") pod "openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" (UID: "bd7f3a0c-1bdf-4673-b657-f56e7040f2a1") : secret "metrics-server-cert" not found
Feb 02 10:49:47 crc kubenswrapper[4845]: E0202 10:49:47.825104 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" podUID="1b72ed0e-9df5-459f-8ca9-de19874a3018"
Feb 02 10:49:48 crc kubenswrapper[4845]: E0202 10:49:48.574787 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521"
Feb 02 10:49:48 crc kubenswrapper[4845]: E0202 10:49:48.575259 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2r8tn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-bc2xw_openstack-operators(36101b5e-a4ec-42b8-bb19-1cd2df2897c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:49:48 crc kubenswrapper[4845]: E0202 10:49:48.576970 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" podUID="36101b5e-a4ec-42b8-bb19-1cd2df2897c6"
Feb 02 10:49:49 crc kubenswrapper[4845]: E0202 10:49:49.367147 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" podUID="36101b5e-a4ec-42b8-bb19-1cd2df2897c6"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.323853 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.324129 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4h5ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-plj9z_openstack-operators(de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.325276 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" podUID="de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.374711 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" podUID="de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.935445 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.935678 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-76zt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-c4jdf_openstack-operators(d9196fe1-4a04-44c1-9a5f-1ad5de52da7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:49:50 crc kubenswrapper[4845]: E0202 10:49:50.937503 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" podUID="d9196fe1-4a04-44c1-9a5f-1ad5de52da7f"
Feb 02 10:49:51 crc kubenswrapper[4845]: E0202 10:49:51.384521 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" podUID="d9196fe1-4a04-44c1-9a5f-1ad5de52da7f"
Feb 02 10:49:53 crc kubenswrapper[4845]: E0202 10:49:53.557182 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf"
Feb 02 10:49:53 crc kubenswrapper[4845]: E0202 10:49:53.557945 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v26qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-t89t9_openstack-operators(568bf546-0674-4dbd-91d8-9497c682e368): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:49:53 crc kubenswrapper[4845]: E0202 10:49:53.559375 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" podUID="568bf546-0674-4dbd-91d8-9497c682e368"
Feb 02 10:49:54 crc kubenswrapper[4845]: E0202 10:49:54.405479 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" podUID="568bf546-0674-4dbd-91d8-9497c682e368"
Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.364966 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6"
Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.365380 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbq5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-hnt9f_openstack-operators(a1c4a4d1-3974-47c1-9efc-ee88a38e13a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.367270 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" podUID="a1c4a4d1-3974-47c1-9efc-ee88a38e13a5" Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.413822 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" podUID="a1c4a4d1-3974-47c1-9efc-ee88a38e13a5" Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.894898 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.895081 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtnjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-w7bxj_openstack-operators(817413ef-6c47-47ec-8e08-8dffd27c1e11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:49:55 crc kubenswrapper[4845]: E0202 10:49:55.897097 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" podUID="817413ef-6c47-47ec-8e08-8dffd27c1e11" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.292030 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.292205 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f46rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-w55l4_openstack-operators(7cc6d028-e9d2-459c-b34c-d069917832a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.293444 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" podUID="7cc6d028-e9d2-459c-b34c-d069917832a4" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.421021 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" podUID="7cc6d028-e9d2-459c-b34c-d069917832a4" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.421639 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" podUID="817413ef-6c47-47ec-8e08-8dffd27c1e11" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.816577 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.816748 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtjjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-p98cd_openstack-operators(cac06f19-af65-481d-b739-68375e8d2968): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:49:56 crc kubenswrapper[4845]: E0202 10:49:56.818219 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" podUID="cac06f19-af65-481d-b739-68375e8d2968" Feb 02 10:49:57 crc kubenswrapper[4845]: E0202 10:49:57.429349 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" podUID="cac06f19-af65-481d-b739-68375e8d2968" Feb 02 10:49:59 crc kubenswrapper[4845]: E0202 10:49:59.934648 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 10:49:59 crc kubenswrapper[4845]: E0202 10:49:59.935401 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4vgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-z5f9l_openstack-operators(39f98254-3b87-4ac2-be8c-7d7a0f29d6ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:49:59 crc kubenswrapper[4845]: E0202 10:49:59.936716 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" podUID="39f98254-3b87-4ac2-be8c-7d7a0f29d6ce" Feb 02 10:50:00 crc kubenswrapper[4845]: E0202 10:50:00.473526 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" 
podUID="39f98254-3b87-4ac2-be8c-7d7a0f29d6ce" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.424397 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.434929 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3c02aa0-5039-4a4f-ae11-1bac119f7e31-cert\") pod \"infra-operator-controller-manager-79955696d6-m55cc\" (UID: \"f3c02aa0-5039-4a4f-ae11-1bac119f7e31\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.490752 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" event={"ID":"202de28c-c44a-43d9-98fd-4b34b1dcc65f","Type":"ContainerStarted","Data":"6ea86470b646489dc04ee02411af87253bbf6fae406231373de24e474b5d3f44"} Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.490946 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.508118 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" podStartSLOduration=11.810277426 podStartE2EDuration="33.508099168s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:31.848876989 +0000 UTC m=+1052.940278439" lastFinishedPulling="2026-02-02 10:49:53.546698731 +0000 UTC m=+1074.638100181" observedRunningTime="2026-02-02 10:50:02.505427832 +0000 UTC 
m=+1083.596829292" watchObservedRunningTime="2026-02-02 10:50:02.508099168 +0000 UTC m=+1083.599500618" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.674707 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dzl6d" Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.675587 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh"] Feb 02 10:50:02 crc kubenswrapper[4845]: I0202 10:50:02.682995 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:50:02 crc kubenswrapper[4845]: W0202 10:50:02.701260 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod146aa38c_b63c_485a_9c55_006031cfcaa0.slice/crio-0cf5e906c1bdf92cea6afcc0300e1de4d45ece67b02a2b70b0748f4f2f9a38d2 WatchSource:0}: Error finding container 0cf5e906c1bdf92cea6afcc0300e1de4d45ece67b02a2b70b0748f4f2f9a38d2: Status 404 returned error can't find the container with id 0cf5e906c1bdf92cea6afcc0300e1de4d45ece67b02a2b70b0748f4f2f9a38d2 Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.310008 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m55cc"] Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.368040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.368193 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.377817 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-webhook-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.381530 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd7f3a0c-1bdf-4673-b657-f56e7040f2a1-metrics-certs\") pod \"openstack-operator-controller-manager-5b7c7bb6c9-k6h5z\" (UID: \"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1\") " pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.464255 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zbcwm" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.470984 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.523495 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" event={"ID":"1cec8fc8-2b7b-4332-92a4-05483486f925","Type":"ContainerStarted","Data":"2f113d242413af7ec6c418077c89369d62767a51df2767c705c532067a561d01"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.524461 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.531678 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" event={"ID":"eae9c104-9193-4404-b25a-3a47932ef374","Type":"ContainerStarted","Data":"722d8884135133c8161e049437f8b1919dd1c02f37c836f83c85d7181e1e5c15"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.533033 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.534345 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" event={"ID":"85439e8a-f7d3-4e0b-827c-bf27e8cd53dd","Type":"ContainerStarted","Data":"bcf6c338cc6514e6540f97f7b0fbadc7fe5a207eb9a4ab07a2c9f7ef00d555e1"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.534873 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.535951 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" 
event={"ID":"146aa38c-b63c-485a-9c55-006031cfcaa0","Type":"ContainerStarted","Data":"0cf5e906c1bdf92cea6afcc0300e1de4d45ece67b02a2b70b0748f4f2f9a38d2"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.556691 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" podStartSLOduration=4.807866013 podStartE2EDuration="33.556668009s" podCreationTimestamp="2026-02-02 10:49:30 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.604811803 +0000 UTC m=+1054.696213253" lastFinishedPulling="2026-02-02 10:50:02.353613799 +0000 UTC m=+1083.445015249" observedRunningTime="2026-02-02 10:50:03.53655443 +0000 UTC m=+1084.627955880" watchObservedRunningTime="2026-02-02 10:50:03.556668009 +0000 UTC m=+1084.648069459" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.574027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" event={"ID":"efa2be30-a7d0-4b26-865a-58448de203a0","Type":"ContainerStarted","Data":"5db06f0e6d4fe53fcc1bdab4ffdf8dc864fb27e5d461e801d85921f70ab549f5"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.574075 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.598202 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" event={"ID":"30843195-75a4-4b59-9193-dacda845ace7","Type":"ContainerStarted","Data":"32eefedddb314cb1cfb917f22e3903c5cc7aa7bf14546aa3765816e2ba63c4bd"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.599203 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 
10:50:03.599291 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" podStartSLOduration=11.393228334 podStartE2EDuration="34.599272814s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.600643405 +0000 UTC m=+1054.692044855" lastFinishedPulling="2026-02-02 10:49:56.806687865 +0000 UTC m=+1077.898089335" observedRunningTime="2026-02-02 10:50:03.56837312 +0000 UTC m=+1084.659774580" watchObservedRunningTime="2026-02-02 10:50:03.599272814 +0000 UTC m=+1084.690674264" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.635906 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" event={"ID":"70403789-9865-4c4d-a969-118a157e564e","Type":"ContainerStarted","Data":"26054c7430719219d22c442c133c930866732ad30b24f2f446ce5c1e3c070d8b"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.636323 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.638124 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" event={"ID":"de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d","Type":"ContainerStarted","Data":"75966db573248fbe4b0b90602ebe3726e8063b21b0e0a26857b5738c057756e3"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.638693 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" podStartSLOduration=11.916782337 podStartE2EDuration="34.636337181s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.631999734 +0000 UTC m=+1053.723401194" lastFinishedPulling="2026-02-02 10:49:55.351554578 +0000 UTC 
m=+1076.442956038" observedRunningTime="2026-02-02 10:50:03.599262693 +0000 UTC m=+1084.690664143" watchObservedRunningTime="2026-02-02 10:50:03.636337181 +0000 UTC m=+1084.727738631" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.638961 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.640361 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" event={"ID":"745626d8-548b-43bb-aee8-eeab34a86427","Type":"ContainerStarted","Data":"f6b1058879fd6329016e9b172d3d8474ffc4d2332bfdd3cf26b371af7171f2ab"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.640632 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.643484 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" event={"ID":"09ccace8-b972-48ae-a15d-ecf88a300105","Type":"ContainerStarted","Data":"6430dec9a81641179f6b9d383f39a0f49c4e16e4e9e0b4fd782ee73143f0dbba"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.643722 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.670473 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" podStartSLOduration=10.530877596 podStartE2EDuration="34.670455785s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.667090816 +0000 UTC m=+1053.758492266" lastFinishedPulling="2026-02-02 10:49:56.806669005 +0000 UTC 
m=+1077.898070455" observedRunningTime="2026-02-02 10:50:03.62495043 +0000 UTC m=+1084.716351880" watchObservedRunningTime="2026-02-02 10:50:03.670455785 +0000 UTC m=+1084.761857235" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.686665 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" event={"ID":"1b72ed0e-9df5-459f-8ca9-de19874a3018","Type":"ContainerStarted","Data":"cdaa6d538e1d17c85f1173a2b704e4222ea1b6ebb6ce68cb7469fe9f403b4e11"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.687652 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.688450 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" podStartSLOduration=6.11226639 podStartE2EDuration="34.688423894s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.608321092 +0000 UTC m=+1054.699722542" lastFinishedPulling="2026-02-02 10:50:02.184478566 +0000 UTC m=+1083.275880046" observedRunningTime="2026-02-02 10:50:03.660439692 +0000 UTC m=+1084.751841152" watchObservedRunningTime="2026-02-02 10:50:03.688423894 +0000 UTC m=+1084.779825344" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.692104 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" podStartSLOduration=5.065203791 podStartE2EDuration="34.692095787s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.933254523 +0000 UTC m=+1054.024655973" lastFinishedPulling="2026-02-02 10:50:02.560146519 +0000 UTC m=+1083.651547969" observedRunningTime="2026-02-02 10:50:03.686287373 +0000 UTC m=+1084.777688833" 
watchObservedRunningTime="2026-02-02 10:50:03.692095787 +0000 UTC m=+1084.783497237" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.693615 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" event={"ID":"f3c02aa0-5039-4a4f-ae11-1bac119f7e31","Type":"ContainerStarted","Data":"59895553f141e9b65908c907a5522db71611bbf9d034da48cd98009d96aa7289"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.713094 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" event={"ID":"5d4eb1a9-137a-4959-9d37-d81ee9c6dd54","Type":"ContainerStarted","Data":"0062025f5074397dfd62fa8e706a8a04e9775f3ffecbed011e7c507f92d33c05"} Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.713581 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.720160 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" podStartSLOduration=5.11652135 podStartE2EDuration="33.720145651s" podCreationTimestamp="2026-02-02 10:49:30 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.604431222 +0000 UTC m=+1054.695832672" lastFinishedPulling="2026-02-02 10:50:02.208055523 +0000 UTC m=+1083.299456973" observedRunningTime="2026-02-02 10:50:03.719369659 +0000 UTC m=+1084.810771129" watchObservedRunningTime="2026-02-02 10:50:03.720145651 +0000 UTC m=+1084.811547101" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.784920 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" podStartSLOduration=6.821314207 podStartE2EDuration="33.784903642s" podCreationTimestamp="2026-02-02 10:49:30 +0000 UTC" 
firstStartedPulling="2026-02-02 10:49:32.940326583 +0000 UTC m=+1054.031728033" lastFinishedPulling="2026-02-02 10:49:59.903916008 +0000 UTC m=+1080.995317468" observedRunningTime="2026-02-02 10:50:03.751383974 +0000 UTC m=+1084.842785424" watchObservedRunningTime="2026-02-02 10:50:03.784903642 +0000 UTC m=+1084.876305092" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.795871 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" podStartSLOduration=11.146524454 podStartE2EDuration="34.795853481s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.624270985 +0000 UTC m=+1053.715672435" lastFinishedPulling="2026-02-02 10:49:56.273600012 +0000 UTC m=+1077.365001462" observedRunningTime="2026-02-02 10:50:03.784170341 +0000 UTC m=+1084.875571791" watchObservedRunningTime="2026-02-02 10:50:03.795853481 +0000 UTC m=+1084.887254931" Feb 02 10:50:03 crc kubenswrapper[4845]: I0202 10:50:03.815289 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" podStartSLOduration=5.146939411 podStartE2EDuration="33.815273281s" podCreationTimestamp="2026-02-02 10:49:30 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.604878875 +0000 UTC m=+1054.696280325" lastFinishedPulling="2026-02-02 10:50:02.273212745 +0000 UTC m=+1083.364614195" observedRunningTime="2026-02-02 10:50:03.810103514 +0000 UTC m=+1084.901504964" watchObservedRunningTime="2026-02-02 10:50:03.815273281 +0000 UTC m=+1084.906674731" Feb 02 10:50:04 crc kubenswrapper[4845]: I0202 10:50:04.291191 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" podStartSLOduration=5.581081659 podStartE2EDuration="35.291174118s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" 
firstStartedPulling="2026-02-02 10:49:32.610178337 +0000 UTC m=+1053.701579787" lastFinishedPulling="2026-02-02 10:50:02.320270796 +0000 UTC m=+1083.411672246" observedRunningTime="2026-02-02 10:50:03.852299348 +0000 UTC m=+1084.943700788" watchObservedRunningTime="2026-02-02 10:50:04.291174118 +0000 UTC m=+1085.382575568" Feb 02 10:50:04 crc kubenswrapper[4845]: I0202 10:50:04.296197 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z"] Feb 02 10:50:04 crc kubenswrapper[4845]: I0202 10:50:04.727708 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" event={"ID":"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1","Type":"ContainerStarted","Data":"0ee7e50f4b1d8bbbf13b392ab35249694e3d1477fb23b99ab648741c80100e2d"} Feb 02 10:50:04 crc kubenswrapper[4845]: I0202 10:50:04.728109 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" event={"ID":"bd7f3a0c-1bdf-4673-b657-f56e7040f2a1","Type":"ContainerStarted","Data":"6b013b6451099395f51bd0340e59ce991874f6a2891e01de35eda48d8b26c07a"} Feb 02 10:50:04 crc kubenswrapper[4845]: I0202 10:50:04.790724 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" podStartSLOduration=33.790701433 podStartE2EDuration="33.790701433s" podCreationTimestamp="2026-02-02 10:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:50:04.773552258 +0000 UTC m=+1085.864953728" watchObservedRunningTime="2026-02-02 10:50:04.790701433 +0000 UTC m=+1085.882102893" Feb 02 10:50:05 crc kubenswrapper[4845]: I0202 10:50:05.754536 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.771251 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" event={"ID":"36101b5e-a4ec-42b8-bb19-1cd2df2897c6","Type":"ContainerStarted","Data":"8aab7c1a27a4d6c2760f415001de7ce65703efc6049cf580d2ccec9c43d8aef1"} Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.773054 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.774467 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" event={"ID":"f3c02aa0-5039-4a4f-ae11-1bac119f7e31","Type":"ContainerStarted","Data":"5652d6caff502b45b75b50cd13c6239a51e6558a6efdac0f0a5130c1f95e25f9"} Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.774919 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.779524 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" event={"ID":"146aa38c-b63c-485a-9c55-006031cfcaa0","Type":"ContainerStarted","Data":"8bf080431198d6bb12c1543f2c99d84aea51dd066ee236dd12512578e886c92c"} Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.780388 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.786285 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" 
event={"ID":"d9196fe1-4a04-44c1-9a5f-1ad5de52da7f","Type":"ContainerStarted","Data":"ec879a488fbfb2d1d0ce4a5f06c9d0da79178eb12f2240b90633f9f7c93ab0fe"} Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.786709 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.794767 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" podStartSLOduration=4.33490603 podStartE2EDuration="37.794747483s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.931357509 +0000 UTC m=+1054.022758959" lastFinishedPulling="2026-02-02 10:50:06.391198962 +0000 UTC m=+1087.482600412" observedRunningTime="2026-02-02 10:50:06.791319416 +0000 UTC m=+1087.882720866" watchObservedRunningTime="2026-02-02 10:50:06.794747483 +0000 UTC m=+1087.886148933" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.820401 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" podStartSLOduration=34.136847117 podStartE2EDuration="37.820381138s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:50:02.70727918 +0000 UTC m=+1083.798680630" lastFinishedPulling="2026-02-02 10:50:06.390813201 +0000 UTC m=+1087.482214651" observedRunningTime="2026-02-02 10:50:06.815179641 +0000 UTC m=+1087.906581091" watchObservedRunningTime="2026-02-02 10:50:06.820381138 +0000 UTC m=+1087.911782588" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.834954 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" podStartSLOduration=4.079619072 podStartE2EDuration="37.83493552s" 
podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.662452665 +0000 UTC m=+1053.753854115" lastFinishedPulling="2026-02-02 10:50:06.417769073 +0000 UTC m=+1087.509170563" observedRunningTime="2026-02-02 10:50:06.832639055 +0000 UTC m=+1087.924040545" watchObservedRunningTime="2026-02-02 10:50:06.83493552 +0000 UTC m=+1087.926336970" Feb 02 10:50:06 crc kubenswrapper[4845]: I0202 10:50:06.848620 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" podStartSLOduration=34.76500265 podStartE2EDuration="37.848602026s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:50:03.311180527 +0000 UTC m=+1084.402581977" lastFinishedPulling="2026-02-02 10:50:06.394779893 +0000 UTC m=+1087.486181353" observedRunningTime="2026-02-02 10:50:06.846338352 +0000 UTC m=+1087.937739812" watchObservedRunningTime="2026-02-02 10:50:06.848602026 +0000 UTC m=+1087.940003476" Feb 02 10:50:08 crc kubenswrapper[4845]: I0202 10:50:08.799614 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" event={"ID":"817413ef-6c47-47ec-8e08-8dffd27c1e11","Type":"ContainerStarted","Data":"181f746d2bf3ca1d10c7c0ff5cf39c881cdca0bb5824121e7ba211ea2cf8c2ab"} Feb 02 10:50:08 crc kubenswrapper[4845]: I0202 10:50:08.800185 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:50:08 crc kubenswrapper[4845]: I0202 10:50:08.819095 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" podStartSLOduration=4.68471048 podStartE2EDuration="38.819077276s" podCreationTimestamp="2026-02-02 10:49:30 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.600960504 +0000 UTC 
m=+1054.692361954" lastFinishedPulling="2026-02-02 10:50:07.73532729 +0000 UTC m=+1088.826728750" observedRunningTime="2026-02-02 10:50:08.816802232 +0000 UTC m=+1089.908203682" watchObservedRunningTime="2026-02-02 10:50:08.819077276 +0000 UTC m=+1089.910478726" Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.813098 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" event={"ID":"568bf546-0674-4dbd-91d8-9497c682e368","Type":"ContainerStarted","Data":"da34a8fd9d9d0955f4723dec5a28ea7662aa005abfa7f4b862c1db7965ad802a"} Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.814194 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.817015 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" event={"ID":"a1c4a4d1-3974-47c1-9efc-ee88a38e13a5","Type":"ContainerStarted","Data":"0dc33da4c4f0c611ced8f2c0713bcb03cd29cc6170b20f4b4bd2712e93200b15"} Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.817355 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.833545 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" podStartSLOduration=5.01070095 podStartE2EDuration="40.833528162s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.529171754 +0000 UTC m=+1054.620573204" lastFinishedPulling="2026-02-02 10:50:09.351998976 +0000 UTC m=+1090.443400416" observedRunningTime="2026-02-02 10:50:09.828675615 +0000 UTC m=+1090.920077085" watchObservedRunningTime="2026-02-02 
10:50:09.833528162 +0000 UTC m=+1090.924929612" Feb 02 10:50:09 crc kubenswrapper[4845]: I0202 10:50:09.846758 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" podStartSLOduration=5.17123841 podStartE2EDuration="40.846739276s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.611145202 +0000 UTC m=+1054.702546652" lastFinishedPulling="2026-02-02 10:50:09.286646068 +0000 UTC m=+1090.378047518" observedRunningTime="2026-02-02 10:50:09.845374877 +0000 UTC m=+1090.936776347" watchObservedRunningTime="2026-02-02 10:50:09.846739276 +0000 UTC m=+1090.938140726" Feb 02 10:50:10 crc kubenswrapper[4845]: I0202 10:50:10.079473 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-cfvq7" Feb 02 10:50:10 crc kubenswrapper[4845]: I0202 10:50:10.129407 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-qfprx" Feb 02 10:50:10 crc kubenswrapper[4845]: I0202 10:50:10.133898 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-9c2wv" Feb 02 10:50:10 crc kubenswrapper[4845]: I0202 10:50:10.192219 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pdfcx" Feb 02 10:50:10 crc kubenswrapper[4845]: I0202 10:50:10.221230 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-msjcj" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.133534 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-plj9z" 
Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.136007 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-bc2xw" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.330709 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-s97rq" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.343417 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lt2wj" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.366747 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ltr5" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.400724 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6bbb97ddc6-fx4tn" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.420759 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5f4q4" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.436864 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-mkrpp" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.834073 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" event={"ID":"7cc6d028-e9d2-459c-b34c-d069917832a4","Type":"ContainerStarted","Data":"3b139b41d46c8bbff04fc6dca47a8f78e5d94e56ca37cc12fd705c09fed358dc"} Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.834326 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.835977 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" event={"ID":"cac06f19-af65-481d-b739-68375e8d2968","Type":"ContainerStarted","Data":"a681fbd60d79135f7f92fdd0abea76e3d9dced97e68f5dc30128161d225211cd"} Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.836670 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.861559 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" podStartSLOduration=5.114107413 podStartE2EDuration="42.861538239s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.941374502 +0000 UTC m=+1054.032775952" lastFinishedPulling="2026-02-02 10:50:10.688805328 +0000 UTC m=+1091.780206778" observedRunningTime="2026-02-02 10:50:11.856075704 +0000 UTC m=+1092.947477164" watchObservedRunningTime="2026-02-02 10:50:11.861538239 +0000 UTC m=+1092.952939689" Feb 02 10:50:11 crc kubenswrapper[4845]: I0202 10:50:11.882072 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" podStartSLOduration=5.0390303 podStartE2EDuration="42.882054739s" podCreationTimestamp="2026-02-02 10:49:29 +0000 UTC" firstStartedPulling="2026-02-02 10:49:32.933951082 +0000 UTC m=+1054.025352532" lastFinishedPulling="2026-02-02 10:50:10.776975521 +0000 UTC m=+1091.868376971" observedRunningTime="2026-02-02 10:50:11.874532336 +0000 UTC m=+1092.965933786" watchObservedRunningTime="2026-02-02 10:50:11.882054739 +0000 UTC m=+1092.973456189" Feb 02 10:50:12 crc kubenswrapper[4845]: I0202 
10:50:12.693201 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m55cc" Feb 02 10:50:13 crc kubenswrapper[4845]: I0202 10:50:13.476754 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b7c7bb6c9-k6h5z" Feb 02 10:50:15 crc kubenswrapper[4845]: I0202 10:50:15.873383 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" event={"ID":"39f98254-3b87-4ac2-be8c-7d7a0f29d6ce","Type":"ContainerStarted","Data":"30dd9ec48d8e6cce67dc128578c6cea6011738707c4cedc9b481031750c549ad"} Feb 02 10:50:15 crc kubenswrapper[4845]: I0202 10:50:15.898380 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z5f9l" podStartSLOduration=3.331175683 podStartE2EDuration="44.89835342s" podCreationTimestamp="2026-02-02 10:49:31 +0000 UTC" firstStartedPulling="2026-02-02 10:49:33.601262963 +0000 UTC m=+1054.692664413" lastFinishedPulling="2026-02-02 10:50:15.1684407 +0000 UTC m=+1096.259842150" observedRunningTime="2026-02-02 10:50:15.893629627 +0000 UTC m=+1096.985031077" watchObservedRunningTime="2026-02-02 10:50:15.89835342 +0000 UTC m=+1096.989754870" Feb 02 10:50:16 crc kubenswrapper[4845]: I0202 10:50:16.238168 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:50:16 crc kubenswrapper[4845]: I0202 10:50:16.238619 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:50:16 crc kubenswrapper[4845]: I0202 10:50:16.959674 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh" Feb 02 10:50:20 crc kubenswrapper[4845]: I0202 10:50:20.077458 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-c4jdf" Feb 02 10:50:21 crc kubenswrapper[4845]: I0202 10:50:21.100346 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w55l4" Feb 02 10:50:21 crc kubenswrapper[4845]: I0202 10:50:21.126352 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-p98cd" Feb 02 10:50:21 crc kubenswrapper[4845]: I0202 10:50:21.140220 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t89t9" Feb 02 10:50:21 crc kubenswrapper[4845]: I0202 10:50:21.151123 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-hnt9f" Feb 02 10:50:21 crc kubenswrapper[4845]: I0202 10:50:21.378704 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7bxj" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.870241 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.872362 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.875543 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.875757 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.875860 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.875993 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5d6j8" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.885975 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.926439 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.928404 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.930350 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 10:50:39 crc kubenswrapper[4845]: I0202 10:50:39.945259 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.019648 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.019757 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.019808 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.020035 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6dgn\" (UniqueName: \"kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 
crc kubenswrapper[4845]: I0202 10:50:40.020111 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxqg\" (UniqueName: \"kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.122190 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.122278 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.122389 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6dgn\" (UniqueName: \"kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.122430 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxqg\" (UniqueName: \"kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 
10:50:40.122487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.123134 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.123161 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.123638 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.141406 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6dgn\" (UniqueName: \"kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn\") pod \"dnsmasq-dns-675f4bcbfc-j9fpl\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.145646 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxqg\" (UniqueName: 
\"kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg\") pod \"dnsmasq-dns-78dd6ddcc-pxsn7\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.207602 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.250541 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.755247 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:50:40 crc kubenswrapper[4845]: I0202 10:50:40.849315 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:50:40 crc kubenswrapper[4845]: W0202 10:50:40.854717 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5ff22c8_ded6_4209_9503_f1e66526c1d5.slice/crio-3feed3c8637a37b93c2f596a1883e1905a851bfbc1c2fe28022e86b8367a3d6d WatchSource:0}: Error finding container 3feed3c8637a37b93c2f596a1883e1905a851bfbc1c2fe28022e86b8367a3d6d: Status 404 returned error can't find the container with id 3feed3c8637a37b93c2f596a1883e1905a851bfbc1c2fe28022e86b8367a3d6d Feb 02 10:50:41 crc kubenswrapper[4845]: I0202 10:50:41.092999 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" event={"ID":"b15a2eeb-8248-40e0-b9a6-294ed99f1177","Type":"ContainerStarted","Data":"541d01d5e724cfd983149b2a3df13b41fdb7aca180b15b693908ce4201c3363e"} Feb 02 10:50:41 crc kubenswrapper[4845]: I0202 10:50:41.095004 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" 
event={"ID":"a5ff22c8-ded6-4209-9503-f1e66526c1d5","Type":"ContainerStarted","Data":"3feed3c8637a37b93c2f596a1883e1905a851bfbc1c2fe28022e86b8367a3d6d"} Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.661591 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.684640 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.687772 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.704387 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.779875 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.780084 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.780162 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktfp\" (UniqueName: \"kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " 
pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.882624 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktfp\" (UniqueName: \"kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.883065 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.883109 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.883951 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.884175 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.936084 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktfp\" (UniqueName: \"kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp\") pod \"dnsmasq-dns-666b6646f7-d26cn\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:42 crc kubenswrapper[4845]: I0202 10:50:42.998628 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.013601 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.025654 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.027352 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.046853 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.191693 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.191804 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc 
kubenswrapper[4845]: I0202 10:50:43.191986 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6c6\" (UniqueName: \"kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.294460 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6c6\" (UniqueName: \"kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.294867 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.294940 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.295875 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.295946 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.315808 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq6c6\" (UniqueName: \"kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6\") pod \"dnsmasq-dns-57d769cc4f-w59pq\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.360875 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.584687 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.813432 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.815810 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.818601 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.818678 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.818706 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.818646 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.818734 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.819294 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tclgq" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.820005 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.841287 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.857437 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.859724 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.865537 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.867331 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.877099 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 10:50:43 crc kubenswrapper[4845]: W0202 10:50:43.883820 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d63dd57_08d9_4913_b1d3_36a9c8b5db2e.slice/crio-7586b993b7c8e47124f3e9c3b1a81c730d51fb7fd5b1683f7e203dc16fdb11b3 WatchSource:0}: Error finding container 7586b993b7c8e47124f3e9c3b1a81c730d51fb7fd5b1683f7e203dc16fdb11b3: Status 404 returned error can't find the container with id 7586b993b7c8e47124f3e9c3b1a81c730d51fb7fd5b1683f7e203dc16fdb11b3 Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933371 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933456 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933490 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933555 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933594 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933665 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933726 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstjm\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-kube-api-access-rstjm\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933799 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 
10:50:43.933905 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.933966 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.934040 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.937195 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 10:50:43 crc kubenswrapper[4845]: I0202 10:50:43.952594 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.035592 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.035668 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.035698 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hdt\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-kube-api-access-w6hdt\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.035836 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.035944 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036003 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036027 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a61fa08e-868a-4415-88d5-7ed0eebbeb45-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036060 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036083 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036106 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz27x\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-kube-api-access-mz27x\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036143 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036213 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-server-conf\") pod 
\"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036255 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036290 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036375 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0a3a285-364a-4df2-8a7c-947ff673f254-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036422 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " 
pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036453 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036481 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036527 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rstjm\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-kube-api-access-rstjm\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036556 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0a3a285-364a-4df2-8a7c-947ff673f254-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036587 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-config-data\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: 
I0202 10:50:44.036621 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036658 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a61fa08e-868a-4415-88d5-7ed0eebbeb45-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036694 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036727 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036767 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036808 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036845 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036873 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.036909 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.038468 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.038768 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.039772 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.040571 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-config-data\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.040630 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.041467 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.041657 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" 
Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.041981 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.042023 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.042024 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3b6438ecda997c67dcc63770328ff6a865176e2ea3582236dd879581a55b9845/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.043595 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.048643 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.052690 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rstjm\" (UniqueName: \"kubernetes.io/projected/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-kube-api-access-rstjm\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.052766 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e45ad6a-20f4-4da2-82b7-500ed29a0cd5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.084638 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17dd126b-742a-421d-90d5-fabe0fddb7cb\") pod \"rabbitmq-server-0\" (UID: \"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5\") " pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.141939 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142316 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hdt\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-kube-api-access-w6hdt\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142341 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142381 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142400 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a61fa08e-868a-4415-88d5-7ed0eebbeb45-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142423 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142440 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz27x\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-kube-api-access-mz27x\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142475 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-server-conf\") pod \"rabbitmq-server-1\" (UID: 
\"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142503 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142533 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0a3a285-364a-4df2-8a7c-947ff673f254-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142563 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142582 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142608 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0a3a285-364a-4df2-8a7c-947ff673f254-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142630 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-config-data\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142651 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142683 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a61fa08e-868a-4415-88d5-7ed0eebbeb45-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142712 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142745 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142773 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142813 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142847 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-config-data\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.142878 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.143404 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.148414 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-config-data\") pod \"rabbitmq-server-1\" 
(UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.154595 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.154679 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.154915 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.162645 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.163416 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.163951 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0a3a285-364a-4df2-8a7c-947ff673f254-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.167206 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.167629 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" event={"ID":"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e","Type":"ContainerStarted","Data":"7586b993b7c8e47124f3e9c3b1a81c730d51fb7fd5b1683f7e203dc16fdb11b3"} Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.172145 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0a3a285-364a-4df2-8a7c-947ff673f254-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.172381 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.174561 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.174593 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3c75a6c07d29ad3f6c197fd99c75e4b482cc857102eab05479c9e43ddaa56a8/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.175011 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.176025 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-server-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.176908 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.177525 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " 
pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.177551 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.177679 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.177707 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7877f86ceeec28f55dcdfdad53738fad251a2adbd26577d424dd55d7064b7272/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.177764 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a61fa08e-868a-4415-88d5-7ed0eebbeb45-config-data\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.178371 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a61fa08e-868a-4415-88d5-7ed0eebbeb45-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.180182 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0a3a285-364a-4df2-8a7c-947ff673f254-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.183565 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" event={"ID":"511adc55-f919-42e9-961d-94565550d668","Type":"ContainerStarted","Data":"d2ea1511585c52b6d62ad84745c6c0e0adbcd9ee5f53b0b69c86b9217ec09f38"} Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.184038 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-84mvx" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.184064 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.183869 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.184098 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.184058 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.184193 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.185364 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 
10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.187100 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.190017 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.192357 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hdt\" (UniqueName: \"kubernetes.io/projected/a61fa08e-868a-4415-88d5-7ed0eebbeb45-kube-api-access-w6hdt\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.195138 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz27x\" (UniqueName: \"kubernetes.io/projected/d0a3a285-364a-4df2-8a7c-947ff673f254-kube-api-access-mz27x\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.196444 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a61fa08e-868a-4415-88d5-7ed0eebbeb45-pod-info\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.219175 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8f3f8a22-543f-47e7-b070-4309ca65d7c4\") pod \"rabbitmq-server-1\" (UID: \"d0a3a285-364a-4df2-8a7c-947ff673f254\") " pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.239482 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfbea8aa-e192-4e9b-8928-4b9078218059\") pod \"rabbitmq-server-2\" (UID: \"a61fa08e-868a-4415-88d5-7ed0eebbeb45\") " pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244570 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244688 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244752 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244770 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244845 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgdrd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-kube-api-access-sgdrd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.244970 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.245055 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.245080 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.245130 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.245256 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.245341 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.253622 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349274 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349349 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349426 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349478 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349502 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349587 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349616 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349658 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.349681 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.350731 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgdrd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-kube-api-access-sgdrd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.350854 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.354642 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.355031 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 
10:50:44.356748 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.357423 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.357461 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.364611 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.364608 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.365027 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.367632 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.368522 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.368577 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2eb781001d93de0acf363dfef7c5efded5167520c7db69b373afc490c63ac37/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.396773 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgdrd\" (UniqueName: \"kubernetes.io/projected/70739f91-4fde-4bc2-b4e1-5bdb7cb0426c-kube-api-access-sgdrd\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.458475 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-160bfd2f-7d1e-4a3e-9cce-2c08e4673be2\") pod \"rabbitmq-cell1-server-0\" (UID: \"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.513507 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.612563 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:50:44 crc kubenswrapper[4845]: I0202 10:50:44.882839 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:50:45 crc kubenswrapper[4845]: W0202 10:50:45.010621 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda61fa08e_868a_4415_88d5_7ed0eebbeb45.slice/crio-7388896d8c8e466f6fceaf6796e3bd83cb406ea71301a12e5ae9b626081454c5 WatchSource:0}: Error finding container 7388896d8c8e466f6fceaf6796e3bd83cb406ea71301a12e5ae9b626081454c5: Status 404 returned error can't find the container with id 7388896d8c8e466f6fceaf6796e3bd83cb406ea71301a12e5ae9b626081454c5 Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.012446 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.138422 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 10:50:45 crc kubenswrapper[4845]: W0202 10:50:45.148668 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0a3a285_364a_4df2_8a7c_947ff673f254.slice/crio-58ba90614d41bdc5c5b78de3603f770afb7f2d476f8ce14e4219f353162d7f61 WatchSource:0}: Error finding container 58ba90614d41bdc5c5b78de3603f770afb7f2d476f8ce14e4219f353162d7f61: Status 404 
returned error can't find the container with id 58ba90614d41bdc5c5b78de3603f770afb7f2d476f8ce14e4219f353162d7f61 Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.205852 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a61fa08e-868a-4415-88d5-7ed0eebbeb45","Type":"ContainerStarted","Data":"7388896d8c8e466f6fceaf6796e3bd83cb406ea71301a12e5ae9b626081454c5"} Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.209368 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5","Type":"ContainerStarted","Data":"d42fb8b4a178f65e5d9a6f37454103c9b146f50a0c2805f2968ec18bab4db8bc"} Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.211293 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d0a3a285-364a-4df2-8a7c-947ff673f254","Type":"ContainerStarted","Data":"58ba90614d41bdc5c5b78de3603f770afb7f2d476f8ce14e4219f353162d7f61"} Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.250718 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 10:50:45 crc kubenswrapper[4845]: W0202 10:50:45.255250 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70739f91_4fde_4bc2_b4e1_5bdb7cb0426c.slice/crio-12d1eada5c850d971f8fc58216922ff0b3fd11985867d6916c8eee86d40b59bf WatchSource:0}: Error finding container 12d1eada5c850d971f8fc58216922ff0b3fd11985867d6916c8eee86d40b59bf: Status 404 returned error can't find the container with id 12d1eada5c850d971f8fc58216922ff0b3fd11985867d6916c8eee86d40b59bf Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.387423 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.390920 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.393523 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.394439 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.394576 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.394714 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-n7q8b" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.397365 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.404266 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473695 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473751 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-default\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473809 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473905 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e671cf92-e326-4d61-9db2-6fd521e03115\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e671cf92-e326-4d61-9db2-6fd521e03115\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473941 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.473957 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.474001 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7rs\" (UniqueName: \"kubernetes.io/projected/0c7d4707-dfce-464f-bffe-0d543bea6299-kube-api-access-zw7rs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.474020 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.575926 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e671cf92-e326-4d61-9db2-6fd521e03115\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e671cf92-e326-4d61-9db2-6fd521e03115\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.575986 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576005 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576048 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7rs\" (UniqueName: \"kubernetes.io/projected/0c7d4707-dfce-464f-bffe-0d543bea6299-kube-api-access-zw7rs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576069 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576110 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576132 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-default\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.576184 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.577723 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.581506 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.581553 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c7d4707-dfce-464f-bffe-0d543bea6299-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.581942 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.582344 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7d4707-dfce-464f-bffe-0d543bea6299-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.583307 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d4707-dfce-464f-bffe-0d543bea6299-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.597847 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw7rs\" (UniqueName: \"kubernetes.io/projected/0c7d4707-dfce-464f-bffe-0d543bea6299-kube-api-access-zw7rs\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.603582 4845 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.603740 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e671cf92-e326-4d61-9db2-6fd521e03115\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e671cf92-e326-4d61-9db2-6fd521e03115\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1d745164a5bb26e3bdf1097ec500132c10359a33d584430c1154c293f2cd55f5/globalmount\"" pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.701563 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e671cf92-e326-4d61-9db2-6fd521e03115\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e671cf92-e326-4d61-9db2-6fd521e03115\") pod \"openstack-galera-0\" (UID: \"0c7d4707-dfce-464f-bffe-0d543bea6299\") " pod="openstack/openstack-galera-0" Feb 02 10:50:45 crc kubenswrapper[4845]: I0202 10:50:45.732358 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.220288 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c","Type":"ContainerStarted","Data":"12d1eada5c850d971f8fc58216922ff0b3fd11985867d6916c8eee86d40b59bf"} Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.237956 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.238039 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.238125 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.239163 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.239214 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5" gracePeriod=600 Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.744832 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.746307 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.748879 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.749874 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.749972 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.749866 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5qbzz" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.774654 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.911693 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912055 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912105 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912210 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25ccf740-cc48-4863-8a7d-98548588860f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912228 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912267 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:46 crc kubenswrapper[4845]: I0202 10:50:46.912293 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mxj7\" (UniqueName: \"kubernetes.io/projected/25ccf740-cc48-4863-8a7d-98548588860f-kube-api-access-4mxj7\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014592 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014643 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014701 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25ccf740-cc48-4863-8a7d-98548588860f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014718 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014760 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014785 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mxj7\" (UniqueName: \"kubernetes.io/projected/25ccf740-cc48-4863-8a7d-98548588860f-kube-api-access-4mxj7\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014823 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.014857 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.019600 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.019683 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.019720 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25ccf740-cc48-4863-8a7d-98548588860f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.020471 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25ccf740-cc48-4863-8a7d-98548588860f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.022385 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.022417 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/73d5b639844b7af7811f16645b2a90d571f59d40f63aea33e6e979c9261cbf08/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.024381 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.031963 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccf740-cc48-4863-8a7d-98548588860f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.041864 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mxj7\" (UniqueName: \"kubernetes.io/projected/25ccf740-cc48-4863-8a7d-98548588860f-kube-api-access-4mxj7\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.079830 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34bff2ba-0ab2-484f-8457-823a0af4066c\") pod \"openstack-cell1-galera-0\" (UID: \"25ccf740-cc48-4863-8a7d-98548588860f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.214507 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.215712 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.217713 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.217732 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.219876 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fzh7s" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.240153 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.250914 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5" exitCode=0 Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.250973 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5"} Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.251013 4845 scope.go:117] "RemoveContainer" containerID="5c8a61ef5e1d6c97c545382d55b8a80c690bc952b158b0bc2a66b1f6b33d1ffd" Feb 02 10:50:47 crc 
kubenswrapper[4845]: I0202 10:50:47.320385 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.320485 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-config-data\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.321287 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-kolla-config\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.321598 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf97q\" (UniqueName: \"kubernetes.io/projected/b34640b6-49ff-4638-bde8-1bc32e658907-kube-api-access-hf97q\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.321635 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.369647 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.423652 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.423790 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.423840 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-config-data\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.423900 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-kolla-config\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.423925 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf97q\" (UniqueName: \"kubernetes.io/projected/b34640b6-49ff-4638-bde8-1bc32e658907-kube-api-access-hf97q\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.425043 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-config-data\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.425140 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b34640b6-49ff-4638-bde8-1bc32e658907-kolla-config\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.438172 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.438251 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34640b6-49ff-4638-bde8-1bc32e658907-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.453542 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf97q\" (UniqueName: \"kubernetes.io/projected/b34640b6-49ff-4638-bde8-1bc32e658907-kube-api-access-hf97q\") pod \"memcached-0\" (UID: \"b34640b6-49ff-4638-bde8-1bc32e658907\") " pod="openstack/memcached-0" Feb 02 10:50:47 crc kubenswrapper[4845]: I0202 10:50:47.548793 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 10:50:48 crc kubenswrapper[4845]: I0202 10:50:48.966056 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:50:48 crc kubenswrapper[4845]: I0202 10:50:48.970900 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:50:48 crc kubenswrapper[4845]: I0202 10:50:48.977962 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g5w45" Feb 02 10:50:48 crc kubenswrapper[4845]: I0202 10:50:48.987463 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.062047 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6hd\" (UniqueName: \"kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd\") pod \"kube-state-metrics-0\" (UID: \"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31\") " pod="openstack/kube-state-metrics-0" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.165101 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr6hd\" (UniqueName: \"kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd\") pod \"kube-state-metrics-0\" (UID: \"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31\") " pod="openstack/kube-state-metrics-0" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.202617 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr6hd\" (UniqueName: \"kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd\") pod \"kube-state-metrics-0\" (UID: \"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31\") " pod="openstack/kube-state-metrics-0" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.298326 4845 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.783593 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl"] Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.785115 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.794170 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.794280 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-zqrg9" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.800831 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl"] Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.894947 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfh4\" (UniqueName: \"kubernetes.io/projected/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-kube-api-access-cvfh4\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.895322 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 
10:50:49.997643 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:49 crc kubenswrapper[4845]: I0202 10:50:49.997752 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfh4\" (UniqueName: \"kubernetes.io/projected/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-kube-api-access-cvfh4\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.002724 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.027638 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfh4\" (UniqueName: \"kubernetes.io/projected/0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b-kube-api-access-cvfh4\") pod \"observability-ui-dashboards-66cbf594b5-8x2hl\" (UID: \"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.125412 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.195454 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fd44458cd-cp9b7"] Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.196639 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.206386 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd44458cd-cp9b7"] Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.267934 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.270370 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.272516 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.272553 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.272577 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wp8jb" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.272516 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.273536 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.273699 4845 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.273815 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.281411 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.287536 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.308964 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-service-ca\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309011 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-oauth-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309056 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309074 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-oauth-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309141 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khm9t\" (UniqueName: \"kubernetes.io/projected/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-kube-api-access-khm9t\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309169 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.309190 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-trusted-ca-bundle\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.411999 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-oauth-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412074 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5br\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412099 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412116 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-oauth-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412133 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412163 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412191 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412230 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412273 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412301 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khm9t\" (UniqueName: \"kubernetes.io/projected/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-kube-api-access-khm9t\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412345 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" 
Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412382 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-trusted-ca-bundle\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412410 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412453 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412486 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412525 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412563 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-service-ca\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.412923 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.413369 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-service-ca\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.414010 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-trusted-ca-bundle\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.414426 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-oauth-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.417357 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-oauth-config\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.419371 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-console-serving-cert\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.432000 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khm9t\" (UniqueName: \"kubernetes.io/projected/d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3-kube-api-access-khm9t\") pod \"console-6fd44458cd-cp9b7\" (UID: \"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3\") " pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.515827 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.516145 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.516625 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.516712 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.516847 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5br\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.516951 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.517042 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.517118 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.517201 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.517301 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.521412 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.521489 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.521813 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.523105 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.523386 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.523412 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b560f795087ddb8e1c0fbe0076d2f0e9dba0d3739abc904f350829f75b851b7/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.524083 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.526338 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.527701 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.527735 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 
10:50:50.530373 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.534351 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5br\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.567093 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:50 crc kubenswrapper[4845]: I0202 10:50:50.588949 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.471015 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.475681 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.478781 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.479954 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.480081 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.480142 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.480192 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dwn89" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.480362 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.524025 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tt4db"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.525288 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.528317 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.528514 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-49vj7" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.528689 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.532324 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tt4db"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.541612 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9qwr2"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.544687 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.549034 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9qwr2"] Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.579170 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.579341 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.579436 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.581019 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.581362 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.581645 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.581721 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.581752 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7xb\" (UniqueName: \"kubernetes.io/projected/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-kube-api-access-wc7xb\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683688 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683742 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683781 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683800 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zzg\" (UniqueName: \"kubernetes.io/projected/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-kube-api-access-j9zzg\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683822 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc7xb\" (UniqueName: \"kubernetes.io/projected/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-kube-api-access-wc7xb\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683856 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-run\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683880 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\") pod \"ovsdbserver-nb-0\" (UID: 
\"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683918 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72da7703-b176-47cb-953e-de037d663c55-scripts\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683942 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-log\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683970 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.683990 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684005 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684020 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-combined-ca-bundle\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684045 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-lib\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684064 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684085 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p245\" (UniqueName: \"kubernetes.io/projected/72da7703-b176-47cb-953e-de037d663c55-kube-api-access-2p245\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684146 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-ovn-controller-tls-certs\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684167 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-etc-ovs\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684183 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-scripts\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.684204 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-log-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.685433 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.686067 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.686718 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.688704 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.688725 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0d0596cc78b094b683b64f226aabe0434fe7e7a7a678c91f7b419e7a94390d1/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.689729 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.702850 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.703621 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.705307 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc7xb\" (UniqueName: \"kubernetes.io/projected/bd4a7449-0e37-44e1-9f01-bb1a336cb8cd-kube-api-access-wc7xb\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.740535 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-884e4ed0-8abb-4b74-99c7-8a8968ac54b5\") pod \"ovsdbserver-nb-0\" (UID: \"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.786275 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72da7703-b176-47cb-953e-de037d663c55-scripts\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.786665 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-log\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.786819 
4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-combined-ca-bundle\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787015 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-lib\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787144 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787263 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787352 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p245\" (UniqueName: \"kubernetes.io/projected/72da7703-b176-47cb-953e-de037d663c55-kube-api-access-2p245\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787451 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-ovn-controller-tls-certs\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787530 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-etc-ovs\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787221 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-log\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787688 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-scripts\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787809 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-log-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.787869 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-lib\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " 
pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788050 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zzg\" (UniqueName: \"kubernetes.io/projected/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-kube-api-access-j9zzg\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788164 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788171 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-run\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788398 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-log-ovn\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788457 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72da7703-b176-47cb-953e-de037d663c55-scripts\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788465 4845 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72da7703-b176-47cb-953e-de037d663c55-var-run\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788575 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-var-run\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.788601 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-etc-ovs\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.790726 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-combined-ca-bundle\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.792022 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-scripts\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.797712 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/72da7703-b176-47cb-953e-de037d663c55-ovn-controller-tls-certs\") pod \"ovn-controller-tt4db\" (UID: 
\"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.808259 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p245\" (UniqueName: \"kubernetes.io/projected/72da7703-b176-47cb-953e-de037d663c55-kube-api-access-2p245\") pod \"ovn-controller-tt4db\" (UID: \"72da7703-b176-47cb-953e-de037d663c55\") " pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.813866 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.817592 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zzg\" (UniqueName: \"kubernetes.io/projected/4f430e6a-b6ca-42b5-bb37-e5104bba0bd1-kube-api-access-j9zzg\") pod \"ovn-controller-ovs-9qwr2\" (UID: \"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1\") " pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.846433 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tt4db" Feb 02 10:50:53 crc kubenswrapper[4845]: I0202 10:50:53.868837 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.325589 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.327558 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.333438 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-glfsq" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.333801 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.333994 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.338683 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.341496 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447198 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447250 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447284 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447362 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447385 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl8g9\" (UniqueName: \"kubernetes.io/projected/67a51964-326b-42cd-8055-0822d42557f7-kube-api-access-zl8g9\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447417 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8747bc56-b51a-4599-a12b-1803202b49b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8747bc56-b51a-4599-a12b-1803202b49b7\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447443 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.447577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67a51964-326b-42cd-8055-0822d42557f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549576 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67a51964-326b-42cd-8055-0822d42557f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549661 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549686 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549713 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549779 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549801 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zl8g9\" (UniqueName: \"kubernetes.io/projected/67a51964-326b-42cd-8055-0822d42557f7-kube-api-access-zl8g9\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549833 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8747bc56-b51a-4599-a12b-1803202b49b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8747bc56-b51a-4599-a12b-1803202b49b7\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.549864 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.551038 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67a51964-326b-42cd-8055-0822d42557f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.553453 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.553546 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/67a51964-326b-42cd-8055-0822d42557f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.555452 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.556946 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.557879 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67a51964-326b-42cd-8055-0822d42557f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.558263 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.558299 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8747bc56-b51a-4599-a12b-1803202b49b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8747bc56-b51a-4599-a12b-1803202b49b7\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/30b318debfe56b2d398558b7756ea4b3cc9937d15d6788c6c495212b4197659e/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.572468 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl8g9\" (UniqueName: \"kubernetes.io/projected/67a51964-326b-42cd-8055-0822d42557f7-kube-api-access-zl8g9\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.603110 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8747bc56-b51a-4599-a12b-1803202b49b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8747bc56-b51a-4599-a12b-1803202b49b7\") pod \"ovsdbserver-sb-0\" (UID: \"67a51964-326b-42cd-8055-0822d42557f7\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:50:56 crc kubenswrapper[4845]: I0202 10:50:56.654138 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 10:51:01 crc kubenswrapper[4845]: E0202 10:51:01.854210 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:51:01 crc kubenswrapper[4845]: E0202 10:51:01.854657 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6dgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-j9fpl_openstack(a5ff22c8-ded6-4209-9503-f1e66526c1d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:51:01 crc kubenswrapper[4845]: E0202 10:51:01.855877 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" podUID="a5ff22c8-ded6-4209-9503-f1e66526c1d5" Feb 02 10:51:02 crc kubenswrapper[4845]: E0202 10:51:02.922393 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:51:02 crc kubenswrapper[4845]: E0202 10:51:02.922574 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwxqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pxsn7_openstack(b15a2eeb-8248-40e0-b9a6-294ed99f1177): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:51:02 crc kubenswrapper[4845]: E0202 10:51:02.923751 4845 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" podUID="b15a2eeb-8248-40e0-b9a6-294ed99f1177" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.116284 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.188295 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6dgn\" (UniqueName: \"kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn\") pod \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.188777 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config\") pod \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\" (UID: \"a5ff22c8-ded6-4209-9503-f1e66526c1d5\") " Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.189331 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config" (OuterVolumeSpecName: "config") pod "a5ff22c8-ded6-4209-9503-f1e66526c1d5" (UID: "a5ff22c8-ded6-4209-9503-f1e66526c1d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.189958 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ff22c8-ded6-4209-9503-f1e66526c1d5-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.197658 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn" (OuterVolumeSpecName: "kube-api-access-h6dgn") pod "a5ff22c8-ded6-4209-9503-f1e66526c1d5" (UID: "a5ff22c8-ded6-4209-9503-f1e66526c1d5"). InnerVolumeSpecName "kube-api-access-h6dgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.293381 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6dgn\" (UniqueName: \"kubernetes.io/projected/a5ff22c8-ded6-4209-9503-f1e66526c1d5-kube-api-access-h6dgn\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.381933 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:51:03 crc kubenswrapper[4845]: W0202 10:51:03.416216 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25ccf740_cc48_4863_8a7d_98548588860f.slice/crio-c4a023146aca1659ede14943dfd5235cc6dcd47c52eb513fd17537bff2089179 WatchSource:0}: Error finding container c4a023146aca1659ede14943dfd5235cc6dcd47c52eb513fd17537bff2089179: Status 404 returned error can't find the container with id c4a023146aca1659ede14943dfd5235cc6dcd47c52eb513fd17537bff2089179 Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.500976 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"25ccf740-cc48-4863-8a7d-98548588860f","Type":"ContainerStarted","Data":"c4a023146aca1659ede14943dfd5235cc6dcd47c52eb513fd17537bff2089179"} Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.502045 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" event={"ID":"a5ff22c8-ded6-4209-9503-f1e66526c1d5","Type":"ContainerDied","Data":"3feed3c8637a37b93c2f596a1883e1905a851bfbc1c2fe28022e86b8367a3d6d"} Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.502116 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-j9fpl" Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.601540 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.626856 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-j9fpl"] Feb 02 10:51:03 crc kubenswrapper[4845]: I0202 10:51:03.731448 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ff22c8-ded6-4209-9503-f1e66526c1d5" path="/var/lib/kubelet/pods/a5ff22c8-ded6-4209-9503-f1e66526c1d5/volumes" Feb 02 10:51:04 crc kubenswrapper[4845]: W0202 10:51:04.139228 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c7d4707_dfce_464f_bffe_0d543bea6299.slice/crio-45fa2ff82f9ba3fc9efe2a936e5059929a453bdaa544842bd03722f59f5c4e39 WatchSource:0}: Error finding container 45fa2ff82f9ba3fc9efe2a936e5059929a453bdaa544842bd03722f59f5c4e39: Status 404 returned error can't find the container with id 45fa2ff82f9ba3fc9efe2a936e5059929a453bdaa544842bd03722f59f5c4e39 Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.144808 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.190142 4845 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.203719 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tt4db"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.429300 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl"] Feb 02 10:51:04 crc kubenswrapper[4845]: W0202 10:51:04.440521 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34640b6_49ff_4638_bde8_1bc32e658907.slice/crio-411829893a7b733a7a95d161390de8ff280d723162448f3b6e737696178e1fb9 WatchSource:0}: Error finding container 411829893a7b733a7a95d161390de8ff280d723162448f3b6e737696178e1fb9: Status 404 returned error can't find the container with id 411829893a7b733a7a95d161390de8ff280d723162448f3b6e737696178e1fb9 Feb 02 10:51:04 crc kubenswrapper[4845]: W0202 10:51:04.450299 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd36a22_def8_4ad5_b1b2_ac23ef1ea70b.slice/crio-c77dfff68d255c219e308dea922d30a44337fe550725d5f01e63c26e179fea66 WatchSource:0}: Error finding container c77dfff68d255c219e308dea922d30a44337fe550725d5f01e63c26e179fea66: Status 404 returned error can't find the container with id c77dfff68d255c219e308dea922d30a44337fe550725d5f01e63c26e179fea66 Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.527447 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c7d4707-dfce-464f-bffe-0d543bea6299","Type":"ContainerStarted","Data":"45fa2ff82f9ba3fc9efe2a936e5059929a453bdaa544842bd03722f59f5c4e39"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.530686 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" event={"ID":"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b","Type":"ContainerStarted","Data":"c77dfff68d255c219e308dea922d30a44337fe550725d5f01e63c26e179fea66"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.533920 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db" event={"ID":"72da7703-b176-47cb-953e-de037d663c55","Type":"ContainerStarted","Data":"0508e98d889b1df3d4d2d7fdd99890c969a037aa3b75615bdf7a584fc4c78fe5"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.542823 4845 generic.go:334] "Generic (PLEG): container finished" podID="511adc55-f919-42e9-961d-94565550d668" containerID="56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c" exitCode=0 Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.542930 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" event={"ID":"511adc55-f919-42e9-961d-94565550d668","Type":"ContainerDied","Data":"56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.544570 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" event={"ID":"b15a2eeb-8248-40e0-b9a6-294ed99f1177","Type":"ContainerDied","Data":"541d01d5e724cfd983149b2a3df13b41fdb7aca180b15b693908ce4201c3363e"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.544614 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541d01d5e724cfd983149b2a3df13b41fdb7aca180b15b693908ce4201c3363e" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.554583 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c"} Feb 02 10:51:04 crc 
kubenswrapper[4845]: I0202 10:51:04.562373 4845 generic.go:334] "Generic (PLEG): container finished" podID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerID="932d8191c1cbfeca1c9bdbbf6bdc47b46318b7170046d467a566c55700027df8" exitCode=0 Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.562834 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" event={"ID":"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e","Type":"ContainerDied","Data":"932d8191c1cbfeca1c9bdbbf6bdc47b46318b7170046d467a566c55700027df8"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.570411 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b34640b6-49ff-4638-bde8-1bc32e658907","Type":"ContainerStarted","Data":"411829893a7b733a7a95d161390de8ff280d723162448f3b6e737696178e1fb9"} Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.571583 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd44458cd-cp9b7"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.587189 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.672101 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.722287 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc\") pod \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.723001 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config\") pod \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.724322 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxqg\" (UniqueName: \"kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg\") pod \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\" (UID: \"b15a2eeb-8248-40e0-b9a6-294ed99f1177\") " Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.727355 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b15a2eeb-8248-40e0-b9a6-294ed99f1177" (UID: "b15a2eeb-8248-40e0-b9a6-294ed99f1177"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.727673 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config" (OuterVolumeSpecName: "config") pod "b15a2eeb-8248-40e0-b9a6-294ed99f1177" (UID: "b15a2eeb-8248-40e0-b9a6-294ed99f1177"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.770506 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg" (OuterVolumeSpecName: "kube-api-access-cwxqg") pod "b15a2eeb-8248-40e0-b9a6-294ed99f1177" (UID: "b15a2eeb-8248-40e0-b9a6-294ed99f1177"). InnerVolumeSpecName "kube-api-access-cwxqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.833969 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.834015 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxqg\" (UniqueName: \"kubernetes.io/projected/b15a2eeb-8248-40e0-b9a6-294ed99f1177-kube-api-access-cwxqg\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.834031 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15a2eeb-8248-40e0-b9a6-294ed99f1177-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.852515 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:51:04 crc kubenswrapper[4845]: W0202 10:51:04.870626 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c0daea_a5c7_4695_bd4f_ad9a3aaf7d31.slice/crio-e2f9a94d7e921b062f55ff2afa3f02c589bdda200432ed3b8b900c15b062f04e WatchSource:0}: Error finding container e2f9a94d7e921b062f55ff2afa3f02c589bdda200432ed3b8b900c15b062f04e: Status 404 returned error can't find the container with id e2f9a94d7e921b062f55ff2afa3f02c589bdda200432ed3b8b900c15b062f04e 
Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.879432 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 10:51:04 crc kubenswrapper[4845]: I0202 10:51:04.983209 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.079844 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9qwr2"] Feb 02 10:51:05 crc kubenswrapper[4845]: W0202 10:51:05.102465 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f430e6a_b6ca_42b5_bb37_e5104bba0bd1.slice/crio-2e5190a29ed4be804089bade4de5c8fb5a3f27a51fe28535025b80834854824c WatchSource:0}: Error finding container 2e5190a29ed4be804089bade4de5c8fb5a3f27a51fe28535025b80834854824c: Status 404 returned error can't find the container with id 2e5190a29ed4be804089bade4de5c8fb5a3f27a51fe28535025b80834854824c Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.585510 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5","Type":"ContainerStarted","Data":"b3ca7a9a30c5133d142f8240cf49d184ac360dc0f74e089c87c96d3a92c7d96d"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.590086 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.593392 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" event={"ID":"511adc55-f919-42e9-961d-94565550d668","Type":"ContainerStarted","Data":"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.594142 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 
10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.596380 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c","Type":"ContainerStarted","Data":"16a87dbd784c474f3d4b29bfb2f9739515e8d502ec347cc5ddd63af0721bc8af"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.606091 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd44458cd-cp9b7" event={"ID":"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3","Type":"ContainerStarted","Data":"a8d174f01b9c46523f1f8f4592326ffbc634985d50e21741e0617cfbbbfb4dc8"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.606139 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd44458cd-cp9b7" event={"ID":"d722cd7f-7c4d-4062-8aeb-ff1a8b8c6cb3","Type":"ContainerStarted","Data":"98cceb545a8cf0dd260405b8d67f50cec74f748ea790ab4058c268d633c80df0"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.608080 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerStarted","Data":"f66ab08a88fdf01ed8eac1ea6cefb40d4702621c1aec3526c050777cfd6e0be7"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.613993 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31","Type":"ContainerStarted","Data":"e2f9a94d7e921b062f55ff2afa3f02c589bdda200432ed3b8b900c15b062f04e"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.623359 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qwr2" event={"ID":"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1","Type":"ContainerStarted","Data":"2e5190a29ed4be804089bade4de5c8fb5a3f27a51fe28535025b80834854824c"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.625465 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd","Type":"ContainerStarted","Data":"95438a6d7fca85c477f7c0194280c32e6fdf5c2cf8a4182b711e5fb0c2b63950"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.626576 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67a51964-326b-42cd-8055-0822d42557f7","Type":"ContainerStarted","Data":"6df2ca22d7d94445a4e933111b388c806c2aac1ab6d0007aadbbd463ad1bd576"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.643627 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d0a3a285-364a-4df2-8a7c-947ff673f254","Type":"ContainerStarted","Data":"93c6ac3518c8da645fe78eb56c15a31adc12e1d3538e14b7359e676cb11918c9"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.645578 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" podStartSLOduration=4.024834767 podStartE2EDuration="23.645552394s" podCreationTimestamp="2026-02-02 10:50:42 +0000 UTC" firstStartedPulling="2026-02-02 10:50:43.609465266 +0000 UTC m=+1124.700866716" lastFinishedPulling="2026-02-02 10:51:03.230182893 +0000 UTC m=+1144.321584343" observedRunningTime="2026-02-02 10:51:05.640177262 +0000 UTC m=+1146.731578712" watchObservedRunningTime="2026-02-02 10:51:05.645552394 +0000 UTC m=+1146.736953844" Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.653133 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pxsn7" Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.653127 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a61fa08e-868a-4415-88d5-7ed0eebbeb45","Type":"ContainerStarted","Data":"687c373278d72892bc7af53f43d957e03afb3a8da90e855595ea0df53482a4a4"} Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.687566 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" podStartSLOduration=4.279756574 podStartE2EDuration="23.687538191s" podCreationTimestamp="2026-02-02 10:50:42 +0000 UTC" firstStartedPulling="2026-02-02 10:50:43.91517315 +0000 UTC m=+1125.006574600" lastFinishedPulling="2026-02-02 10:51:03.322954777 +0000 UTC m=+1144.414356217" observedRunningTime="2026-02-02 10:51:05.663753828 +0000 UTC m=+1146.755155278" watchObservedRunningTime="2026-02-02 10:51:05.687538191 +0000 UTC m=+1146.778939641" Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.731559 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fd44458cd-cp9b7" podStartSLOduration=15.731533435 podStartE2EDuration="15.731533435s" podCreationTimestamp="2026-02-02 10:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:05.718168797 +0000 UTC m=+1146.809570247" watchObservedRunningTime="2026-02-02 10:51:05.731533435 +0000 UTC m=+1146.822934885" Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.808925 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:51:05 crc kubenswrapper[4845]: I0202 10:51:05.824281 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pxsn7"] Feb 02 10:51:06 crc kubenswrapper[4845]: I0202 10:51:06.665715 4845 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" event={"ID":"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e","Type":"ContainerStarted","Data":"81b2eedcdbc73132319f670807bdc308fb485bc4c468651b7572510bbe0cf821"} Feb 02 10:51:07 crc kubenswrapper[4845]: I0202 10:51:07.733542 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15a2eeb-8248-40e0-b9a6-294ed99f1177" path="/var/lib/kubelet/pods/b15a2eeb-8248-40e0-b9a6-294ed99f1177/volumes" Feb 02 10:51:10 crc kubenswrapper[4845]: I0202 10:51:10.524848 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:51:10 crc kubenswrapper[4845]: I0202 10:51:10.525223 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:51:10 crc kubenswrapper[4845]: I0202 10:51:10.530687 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:51:10 crc kubenswrapper[4845]: I0202 10:51:10.712568 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fd44458cd-cp9b7" Feb 02 10:51:10 crc kubenswrapper[4845]: I0202 10:51:10.800598 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:51:13 crc kubenswrapper[4845]: I0202 10:51:13.016391 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:51:13 crc kubenswrapper[4845]: I0202 10:51:13.364070 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:51:13 crc kubenswrapper[4845]: I0202 10:51:13.428739 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:51:13 crc kubenswrapper[4845]: I0202 10:51:13.735709 4845 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="dnsmasq-dns" containerID="cri-o://9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373" gracePeriod=10 Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.635765 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.747997 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd","Type":"ContainerStarted","Data":"8cd44ac549a9b7bbdf2a3f50bece178a504d7986304b63eb9f84ea47478bdd2b"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.749143 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67a51964-326b-42cd-8055-0822d42557f7","Type":"ContainerStarted","Data":"d17f965a53a332e865cdd52457dfdc1cc0e391ee5f5094a087a7141a6440ebfe"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.750117 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b34640b6-49ff-4638-bde8-1bc32e658907","Type":"ContainerStarted","Data":"a4cca1551b380ad6feeb160bd7b90b0e4b4dddb01e1cf77ece2fb25e0c5b17ad"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.750213 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.752064 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" event={"ID":"0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b","Type":"ContainerStarted","Data":"ed8bcf9838b5585148c683265901fc500970a70f9c0152f854d83a9686a534d1"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.754220 4845 generic.go:334] "Generic (PLEG): container 
finished" podID="511adc55-f919-42e9-961d-94565550d668" containerID="9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373" exitCode=0 Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.754255 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" event={"ID":"511adc55-f919-42e9-961d-94565550d668","Type":"ContainerDied","Data":"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.754279 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.754292 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-d26cn" event={"ID":"511adc55-f919-42e9-961d-94565550d668","Type":"ContainerDied","Data":"d2ea1511585c52b6d62ad84745c6c0e0adbcd9ee5f53b0b69c86b9217ec09f38"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.754310 4845 scope.go:117] "RemoveContainer" containerID="9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.755946 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qwr2" event={"ID":"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1","Type":"ContainerStarted","Data":"f31c8b5c6428576002537de155a0271f970a073899cc982104158309064e40b1"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.757751 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25ccf740-cc48-4863-8a7d-98548588860f","Type":"ContainerStarted","Data":"faa9153539236c39a3f27d3d9abf674976b87091642cf826d02bd0acfc5c4742"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.759817 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31","Type":"ContainerStarted","Data":"cacef27d0dbc8696a18778de5ec2dfe7815ec16e5c7a5beb939fff7f7c4e5a61"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.759979 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.762707 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c7d4707-dfce-464f-bffe-0d543bea6299","Type":"ContainerStarted","Data":"5758beae08428aed0b9ac178c5180e65e78f2dc8d7e70739c722b61d04e15365"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.768047 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db" event={"ID":"72da7703-b176-47cb-953e-de037d663c55","Type":"ContainerStarted","Data":"f23f3930f4c9ee89bf618036dd5d5758a01eef56ce7dab5c6abb14874ac1a05c"} Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.768197 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-tt4db" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.772951 4845 scope.go:117] "RemoveContainer" containerID="56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.778612 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.814882081 podStartE2EDuration="27.778593614s" podCreationTimestamp="2026-02-02 10:50:47 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.444229674 +0000 UTC m=+1145.535631124" lastFinishedPulling="2026-02-02 10:51:12.407941207 +0000 UTC m=+1153.499342657" observedRunningTime="2026-02-02 10:51:14.777145333 +0000 UTC m=+1155.868546783" watchObservedRunningTime="2026-02-02 10:51:14.778593614 +0000 UTC m=+1155.869995064" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.799041 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc\") pod \"511adc55-f919-42e9-961d-94565550d668\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.799161 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ktfp\" (UniqueName: \"kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp\") pod \"511adc55-f919-42e9-961d-94565550d668\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.799840 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config\") pod \"511adc55-f919-42e9-961d-94565550d668\" (UID: \"511adc55-f919-42e9-961d-94565550d668\") " Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.806419 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp" (OuterVolumeSpecName: "kube-api-access-4ktfp") pod "511adc55-f919-42e9-961d-94565550d668" (UID: "511adc55-f919-42e9-961d-94565550d668"). InnerVolumeSpecName "kube-api-access-4ktfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.808718 4845 scope.go:117] "RemoveContainer" containerID="9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373" Feb 02 10:51:14 crc kubenswrapper[4845]: E0202 10:51:14.809441 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373\": container with ID starting with 9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373 not found: ID does not exist" containerID="9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.809492 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373"} err="failed to get container status \"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373\": rpc error: code = NotFound desc = could not find container \"9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373\": container with ID starting with 9cc71f92804d6cd2b52fa4cd13c0548baa684d13d966ca40265d2663512ea373 not found: ID does not exist" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.809511 4845 scope.go:117] "RemoveContainer" containerID="56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c" Feb 02 10:51:14 crc kubenswrapper[4845]: E0202 10:51:14.811236 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c\": container with ID starting with 56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c not found: ID does not exist" containerID="56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.811281 
4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c"} err="failed to get container status \"56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c\": rpc error: code = NotFound desc = could not find container \"56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c\": container with ID starting with 56734a774259fb63015cd94920845626bdeb21ac388842f459cf943fcd96e34c not found: ID does not exist" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.849421 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tt4db" podStartSLOduration=13.629043785 podStartE2EDuration="21.849393906s" podCreationTimestamp="2026-02-02 10:50:53 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.451503959 +0000 UTC m=+1145.542905409" lastFinishedPulling="2026-02-02 10:51:12.67185408 +0000 UTC m=+1153.763255530" observedRunningTime="2026-02-02 10:51:14.843714625 +0000 UTC m=+1155.935116075" watchObservedRunningTime="2026-02-02 10:51:14.849393906 +0000 UTC m=+1155.940795376" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.875772 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.436361357 podStartE2EDuration="26.875736991s" podCreationTimestamp="2026-02-02 10:50:48 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.907415751 +0000 UTC m=+1145.998817201" lastFinishedPulling="2026-02-02 10:51:13.346791385 +0000 UTC m=+1154.438192835" observedRunningTime="2026-02-02 10:51:14.873308702 +0000 UTC m=+1155.964710172" watchObservedRunningTime="2026-02-02 10:51:14.875736991 +0000 UTC m=+1155.967138441" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.903461 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ktfp\" (UniqueName: 
\"kubernetes.io/projected/511adc55-f919-42e9-961d-94565550d668-kube-api-access-4ktfp\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:14 crc kubenswrapper[4845]: I0202 10:51:14.948392 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-8x2hl" podStartSLOduration=17.806125183 podStartE2EDuration="25.948342724s" podCreationTimestamp="2026-02-02 10:50:49 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.455516353 +0000 UTC m=+1145.546917803" lastFinishedPulling="2026-02-02 10:51:12.597733904 +0000 UTC m=+1153.689135344" observedRunningTime="2026-02-02 10:51:14.939324799 +0000 UTC m=+1156.030726249" watchObservedRunningTime="2026-02-02 10:51:14.948342724 +0000 UTC m=+1156.039744194" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.056293 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "511adc55-f919-42e9-961d-94565550d668" (UID: "511adc55-f919-42e9-961d-94565550d668"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.107667 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.159318 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config" (OuterVolumeSpecName: "config") pod "511adc55-f919-42e9-961d-94565550d668" (UID: "511adc55-f919-42e9-961d-94565550d668"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.209460 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/511adc55-f919-42e9-961d-94565550d668-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.600743 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.608061 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-d26cn"] Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.723573 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511adc55-f919-42e9-961d-94565550d668" path="/var/lib/kubelet/pods/511adc55-f919-42e9-961d-94565550d668/volumes" Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.778791 4845 generic.go:334] "Generic (PLEG): container finished" podID="4f430e6a-b6ca-42b5-bb37-e5104bba0bd1" containerID="f31c8b5c6428576002537de155a0271f970a073899cc982104158309064e40b1" exitCode=0 Feb 02 10:51:15 crc kubenswrapper[4845]: I0202 10:51:15.778844 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qwr2" event={"ID":"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1","Type":"ContainerDied","Data":"f31c8b5c6428576002537de155a0271f970a073899cc982104158309064e40b1"} Feb 02 10:51:19 crc kubenswrapper[4845]: I0202 10:51:19.305571 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 10:51:19 crc kubenswrapper[4845]: I0202 10:51:19.826490 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerStarted","Data":"f241191074a6c7fafb8932f36a972acd0bd84c8ac50c73e751b3fdd46aa2e817"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 
10:51:20.843538 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bd4a7449-0e37-44e1-9f01-bb1a336cb8cd","Type":"ContainerStarted","Data":"6eed53d7553fe40b78e6d4f2f944d8588f1d87db411939c19d3f41264a40d239"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.846338 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67a51964-326b-42cd-8055-0822d42557f7","Type":"ContainerStarted","Data":"921d37cec6df5cd7bc740716d3628321e09070d72288a4d06af5c0cce717e5d0"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.851362 4845 generic.go:334] "Generic (PLEG): container finished" podID="0c7d4707-dfce-464f-bffe-0d543bea6299" containerID="5758beae08428aed0b9ac178c5180e65e78f2dc8d7e70739c722b61d04e15365" exitCode=0 Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.851429 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c7d4707-dfce-464f-bffe-0d543bea6299","Type":"ContainerDied","Data":"5758beae08428aed0b9ac178c5180e65e78f2dc8d7e70739c722b61d04e15365"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.856429 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qwr2" event={"ID":"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1","Type":"ContainerStarted","Data":"68007ec0b67118411d1179cf40cafae36ac545ed762a766b8a9c84ebacbf237c"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.856476 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qwr2" event={"ID":"4f430e6a-b6ca-42b5-bb37-e5104bba0bd1","Type":"ContainerStarted","Data":"56559a4056310294c44c018bc2781b154117e8dd2aa24052e502fa3fada6fccf"} Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.857499 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.857746 4845 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.873217 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.71439316 podStartE2EDuration="28.873198593s" podCreationTimestamp="2026-02-02 10:50:52 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.944722396 +0000 UTC m=+1146.036123846" lastFinishedPulling="2026-02-02 10:51:20.103527829 +0000 UTC m=+1161.194929279" observedRunningTime="2026-02-02 10:51:20.872262507 +0000 UTC m=+1161.963663957" watchObservedRunningTime="2026-02-02 10:51:20.873198593 +0000 UTC m=+1161.964600053" Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.963274 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9qwr2" podStartSLOduration=20.547287455 podStartE2EDuration="27.96325669s" podCreationTimestamp="2026-02-02 10:50:53 +0000 UTC" firstStartedPulling="2026-02-02 10:51:05.1057796 +0000 UTC m=+1146.197181050" lastFinishedPulling="2026-02-02 10:51:12.521748835 +0000 UTC m=+1153.613150285" observedRunningTime="2026-02-02 10:51:20.95158709 +0000 UTC m=+1162.042988540" watchObservedRunningTime="2026-02-02 10:51:20.96325669 +0000 UTC m=+1162.054658140" Feb 02 10:51:20 crc kubenswrapper[4845]: I0202 10:51:20.989164 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.884865431 podStartE2EDuration="25.989138122s" podCreationTimestamp="2026-02-02 10:50:55 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.99121788 +0000 UTC m=+1146.082619330" lastFinishedPulling="2026-02-02 10:51:20.095490571 +0000 UTC m=+1161.186892021" observedRunningTime="2026-02-02 10:51:20.978313066 +0000 UTC m=+1162.069714536" watchObservedRunningTime="2026-02-02 10:51:20.989138122 +0000 UTC m=+1162.080539572" Feb 02 10:51:21 crc kubenswrapper[4845]: I0202 10:51:21.654856 
4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:51:21 crc kubenswrapper[4845]: I0202 10:51:21.870931 4845 generic.go:334] "Generic (PLEG): container finished" podID="25ccf740-cc48-4863-8a7d-98548588860f" containerID="faa9153539236c39a3f27d3d9abf674976b87091642cf826d02bd0acfc5c4742" exitCode=0
Feb 02 10:51:21 crc kubenswrapper[4845]: I0202 10:51:21.871004 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25ccf740-cc48-4863-8a7d-98548588860f","Type":"ContainerDied","Data":"faa9153539236c39a3f27d3d9abf674976b87091642cf826d02bd0acfc5c4742"}
Feb 02 10:51:21 crc kubenswrapper[4845]: I0202 10:51:21.874456 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c7d4707-dfce-464f-bffe-0d543bea6299","Type":"ContainerStarted","Data":"befdcfa987dede4011ada4e909e3925039505d823a86f1056236448d85462fca"}
Feb 02 10:51:21 crc kubenswrapper[4845]: I0202 10:51:21.923788 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.394425503 podStartE2EDuration="37.923764611s" podCreationTimestamp="2026-02-02 10:50:44 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.14245677 +0000 UTC m=+1145.233858220" lastFinishedPulling="2026-02-02 10:51:12.671795878 +0000 UTC m=+1153.763197328" observedRunningTime="2026-02-02 10:51:21.922475014 +0000 UTC m=+1163.013876464" watchObservedRunningTime="2026-02-02 10:51:21.923764611 +0000 UTC m=+1163.015166071"
Feb 02 10:51:22 crc kubenswrapper[4845]: I0202 10:51:22.550564 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 02 10:51:22 crc kubenswrapper[4845]: I0202 10:51:22.883421 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25ccf740-cc48-4863-8a7d-98548588860f","Type":"ContainerStarted","Data":"57b3313ac087a2759c26c8780af3f5784d0b7477c664de8d830ec6f84c57774e"}
Feb 02 10:51:22 crc kubenswrapper[4845]: I0202 10:51:22.905435 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.933759984 podStartE2EDuration="37.905412229s" podCreationTimestamp="2026-02-02 10:50:45 +0000 UTC" firstStartedPulling="2026-02-02 10:51:03.436295882 +0000 UTC m=+1144.527697332" lastFinishedPulling="2026-02-02 10:51:12.407948127 +0000 UTC m=+1153.499349577" observedRunningTime="2026-02-02 10:51:22.899935034 +0000 UTC m=+1163.991336504" watchObservedRunningTime="2026-02-02 10:51:22.905412229 +0000 UTC m=+1163.996813679"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.655413 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.694512 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.814859 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.814942 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.854205 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.926984 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 02 10:51:23 crc kubenswrapper[4845]: I0202 10:51:23.932930 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.228340 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"]
Feb 02 10:51:24 crc kubenswrapper[4845]: E0202 10:51:24.228763 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="init"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.228782 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="init"
Feb 02 10:51:24 crc kubenswrapper[4845]: E0202 10:51:24.228813 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="dnsmasq-dns"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.228819 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="dnsmasq-dns"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.233984 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="511adc55-f919-42e9-961d-94565550d668" containerName="dnsmasq-dns"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.235204 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.242023 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.243648 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.320569 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlrw\" (UniqueName: \"kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.320623 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.320661 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.320725 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.354857 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mqgrd"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.364608 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.371348 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.383065 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mqgrd"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.424692 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtlrw\" (UniqueName: \"kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.424773 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.424848 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.424931 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.426681 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.427565 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.429976 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.474393 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtlrw\" (UniqueName: \"kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw\") pod \"dnsmasq-dns-7fd796d7df-l9ddk\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.503169 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.504081 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.526659 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pb2\" (UniqueName: \"kubernetes.io/projected/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-kube-api-access-x7pb2\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.527102 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovs-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.527210 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-config\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.527279 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.527315 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-combined-ca-bundle\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.527360 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovn-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.544358 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.554342 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.572923 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.630610 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pb2\" (UniqueName: \"kubernetes.io/projected/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-kube-api-access-x7pb2\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.630739 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovs-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.630846 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-config\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.630926 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.632143 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-combined-ca-bundle\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.632232 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovn-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.632625 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovn-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.633199 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-ovs-rundir\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.633877 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-config\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.645543 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.647435 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.652808 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-combined-ca-bundle\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.656285 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pb2\" (UniqueName: \"kubernetes.io/projected/0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2-kube-api-access-x7pb2\") pod \"ovn-controller-metrics-mqgrd\" (UID: \"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2\") " pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.682321 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.684456 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.687490 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.687933 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.689005 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 02 10:51:24 crc kubenswrapper[4845]: I0202 10:51:24.689422 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-whxz4"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.693500 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mqgrd"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.696375 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.734298 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297q5\" (UniqueName: \"kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.734655 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.734709 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.734734 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.734760 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.836997 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j5d7\" (UniqueName: \"kubernetes.io/projected/53989098-3602-4958-96b3-ca7c539c29c9-kube-api-access-6j5d7\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837050 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53989098-3602-4958-96b3-ca7c539c29c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837069 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-config\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837099 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837126 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837147 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837187 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837239 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837268 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297q5\" (UniqueName: \"kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837341 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837379 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-scripts\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.837442 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.838808 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.838797 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.838912 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.839467 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.863737 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297q5\" (UniqueName: \"kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5\") pod \"dnsmasq-dns-86db49b7ff-gfr4t\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940581 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j5d7\" (UniqueName: \"kubernetes.io/projected/53989098-3602-4958-96b3-ca7c539c29c9-kube-api-access-6j5d7\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940635 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53989098-3602-4958-96b3-ca7c539c29c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940655 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-config\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940722 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940793 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940868 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.940923 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-scripts\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.941117 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53989098-3602-4958-96b3-ca7c539c29c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.941598 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-config\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.941708 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53989098-3602-4958-96b3-ca7c539c29c9-scripts\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.946195 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.947785 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.955801 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53989098-3602-4958-96b3-ca7c539c29c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:24.961122 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j5d7\" (UniqueName: \"kubernetes.io/projected/53989098-3602-4958-96b3-ca7c539c29c9-kube-api-access-6j5d7\") pod \"ovn-northd-0\" (UID: \"53989098-3602-4958-96b3-ca7c539c29c9\") " pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.043861 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.068539 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.734078 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.734647 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.817226 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.911063 4845 generic.go:334] "Generic (PLEG): container finished" podID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerID="f241191074a6c7fafb8932f36a972acd0bd84c8ac50c73e751b3fdd46aa2e817" exitCode=0
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.911399 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerDied","Data":"f241191074a6c7fafb8932f36a972acd0bd84c8ac50c73e751b3fdd46aa2e817"}
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.923404 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.938511 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"]
Feb 02 10:51:25 crc kubenswrapper[4845]: W0202 10:51:25.947405 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47456531_c404_4086_89b2_d159d71fdeb1.slice/crio-51a381a3e9440b9cc201282058f7396573a6b9b5751fc506fe374daac0e1764f WatchSource:0}: Error finding container 51a381a3e9440b9cc201282058f7396573a6b9b5751fc506fe374daac0e1764f: Status 404 returned error can't find the container with id 51a381a3e9440b9cc201282058f7396573a6b9b5751fc506fe374daac0e1764f
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.957099 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"]
Feb 02 10:51:25 crc kubenswrapper[4845]: W0202 10:51:25.958634 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f152c8c_6cc4_4586_9fcb_c1ddee6e81d2.slice/crio-3d6b0f98f98cd18423fd63a1ce356c926c7fbd4cb2d243c8914bd4b2ad6b46b4 WatchSource:0}: Error finding container 3d6b0f98f98cd18423fd63a1ce356c926c7fbd4cb2d243c8914bd4b2ad6b46b4: Status 404 returned error can't find the container with id 3d6b0f98f98cd18423fd63a1ce356c926c7fbd4cb2d243c8914bd4b2ad6b46b4
Feb 02 10:51:25 crc kubenswrapper[4845]: I0202 10:51:25.970310 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mqgrd"]
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.030368 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.922479 4845 generic.go:334] "Generic (PLEG): container finished" podID="db65c5b8-1fc4-43f9-89bd-51ebb710eccd" containerID="faf3833dd05c13c51f3b521ae7124b1dde7b455aa0d1bcc6b64cb4774ec7cdfd" exitCode=0
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.923012 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk" event={"ID":"db65c5b8-1fc4-43f9-89bd-51ebb710eccd","Type":"ContainerDied","Data":"faf3833dd05c13c51f3b521ae7124b1dde7b455aa0d1bcc6b64cb4774ec7cdfd"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.923048 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk" event={"ID":"db65c5b8-1fc4-43f9-89bd-51ebb710eccd","Type":"ContainerStarted","Data":"cd82bf219d534acecb1bc063cfd4353f103c02c9ba7678657bf6ce8f14343b3f"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.925140 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"53989098-3602-4958-96b3-ca7c539c29c9","Type":"ContainerStarted","Data":"98a909ba82ca34783d904a217c691c1cca3ed276a0ab152887bf580551bfd39f"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.927759 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mqgrd" event={"ID":"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2","Type":"ContainerStarted","Data":"4fdf855357cf9832a13c007b3c91feb5764c38de34e1f8430a07358142f84beb"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.927831 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mqgrd" event={"ID":"0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2","Type":"ContainerStarted","Data":"3d6b0f98f98cd18423fd63a1ce356c926c7fbd4cb2d243c8914bd4b2ad6b46b4"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.935595 4845 generic.go:334] "Generic (PLEG): container finished" podID="47456531-c404-4086-89b2-d159d71fdeb1" containerID="30b7a3b349a6bebefe667d17eaae2b3cbdc973ed20f342ad0701818a0d0cb4b7" exitCode=0
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.936337 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" event={"ID":"47456531-c404-4086-89b2-d159d71fdeb1","Type":"ContainerDied","Data":"30b7a3b349a6bebefe667d17eaae2b3cbdc973ed20f342ad0701818a0d0cb4b7"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.936393 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" event={"ID":"47456531-c404-4086-89b2-d159d71fdeb1","Type":"ContainerStarted","Data":"51a381a3e9440b9cc201282058f7396573a6b9b5751fc506fe374daac0e1764f"}
Feb 02 10:51:26 crc kubenswrapper[4845]: I0202 10:51:26.994152 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mqgrd" podStartSLOduration=2.9941369780000002 podStartE2EDuration="2.994136978s" podCreationTimestamp="2026-02-02 10:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:26.979526941 +0000 UTC m=+1168.070928391" watchObservedRunningTime="2026-02-02 10:51:26.994136978 +0000 UTC m=+1168.085538428"
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.197436 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6d20-account-create-update-8zrt2"]
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.198733 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d20-account-create-update-8zrt2"
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.210573 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.233017 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d20-account-create-update-8zrt2"]
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.266171 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-q8crj"]
Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.267537 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.276074 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q8crj"] Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.297097 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5cd5\" (UniqueName: \"kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.297166 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.371949 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.372263 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.398333 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.398386 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-k5cd5\" (UniqueName: \"kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.398427 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzb5t\" (UniqueName: \"kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.398558 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.399421 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.417776 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5cd5\" (UniqueName: \"kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5\") pod \"keystone-6d20-account-create-update-8zrt2\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc 
kubenswrapper[4845]: I0202 10:51:27.457468 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hchq8"] Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.458832 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.470707 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hchq8"] Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.491265 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.500293 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.500577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k7b\" (UniqueName: \"kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.500905 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.501025 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jzb5t\" (UniqueName: \"kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.501698 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.521938 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzb5t\" (UniqueName: \"kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t\") pod \"keystone-db-create-q8crj\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.537636 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.566215 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-42da-account-create-update-dmqrb"] Feb 02 10:51:27 crc kubenswrapper[4845]: E0202 10:51:27.566632 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db65c5b8-1fc4-43f9-89bd-51ebb710eccd" containerName="init" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.566650 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="db65c5b8-1fc4-43f9-89bd-51ebb710eccd" containerName="init" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.566831 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="db65c5b8-1fc4-43f9-89bd-51ebb710eccd" containerName="init" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.567654 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.570730 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.581867 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-42da-account-create-update-dmqrb"] Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.589568 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602214 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config\") pod \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602290 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb\") pod \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602507 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtlrw\" (UniqueName: \"kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw\") pod \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602564 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc\") pod \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\" (UID: \"db65c5b8-1fc4-43f9-89bd-51ebb710eccd\") " Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602834 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602891 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vz65m\" (UniqueName: \"kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602928 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.602967 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k7b\" (UniqueName: \"kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.603852 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.611079 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw" (OuterVolumeSpecName: "kube-api-access-jtlrw") pod "db65c5b8-1fc4-43f9-89bd-51ebb710eccd" (UID: "db65c5b8-1fc4-43f9-89bd-51ebb710eccd"). InnerVolumeSpecName "kube-api-access-jtlrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.622657 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k7b\" (UniqueName: \"kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b\") pod \"placement-db-create-hchq8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.707677 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz65m\" (UniqueName: \"kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.707730 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.707943 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtlrw\" (UniqueName: \"kubernetes.io/projected/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-kube-api-access-jtlrw\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.708851 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 
10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.711120 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config" (OuterVolumeSpecName: "config") pod "db65c5b8-1fc4-43f9-89bd-51ebb710eccd" (UID: "db65c5b8-1fc4-43f9-89bd-51ebb710eccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.711281 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db65c5b8-1fc4-43f9-89bd-51ebb710eccd" (UID: "db65c5b8-1fc4-43f9-89bd-51ebb710eccd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.735179 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz65m\" (UniqueName: \"kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m\") pod \"placement-42da-account-create-update-dmqrb\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.739283 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db65c5b8-1fc4-43f9-89bd-51ebb710eccd" (UID: "db65c5b8-1fc4-43f9-89bd-51ebb710eccd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.810141 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.810450 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.810464 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db65c5b8-1fc4-43f9-89bd-51ebb710eccd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.821729 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hchq8" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.828732 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.962008 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk" event={"ID":"db65c5b8-1fc4-43f9-89bd-51ebb710eccd","Type":"ContainerDied","Data":"cd82bf219d534acecb1bc063cfd4353f103c02c9ba7678657bf6ce8f14343b3f"} Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.962080 4845 scope.go:117] "RemoveContainer" containerID="faf3833dd05c13c51f3b521ae7124b1dde7b455aa0d1bcc6b64cb4774ec7cdfd" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.962225 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l9ddk" Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.987140 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" event={"ID":"47456531-c404-4086-89b2-d159d71fdeb1","Type":"ContainerStarted","Data":"807d2bbccada01aa21068396fdc8f2bcadc102372f4ecccdc24ca17e629e8a36"} Feb 02 10:51:27 crc kubenswrapper[4845]: I0202 10:51:27.987339 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.013467 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" podStartSLOduration=4.013438911 podStartE2EDuration="4.013438911s" podCreationTimestamp="2026-02-02 10:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:28.006923315 +0000 UTC m=+1169.098324765" watchObservedRunningTime="2026-02-02 10:51:28.013438911 +0000 UTC m=+1169.104840381" Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.066938 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"] Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.072967 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l9ddk"] Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.173997 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q8crj"] Feb 02 10:51:28 crc kubenswrapper[4845]: W0202 10:51:28.201310 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c7eb05_f8ef_40a5_b799_af8bfdfd9c4e.slice/crio-73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035 WatchSource:0}: Error finding container 
73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035: Status 404 returned error can't find the container with id 73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035 Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.282455 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d20-account-create-update-8zrt2"] Feb 02 10:51:28 crc kubenswrapper[4845]: W0202 10:51:28.290547 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod802ba94f_17f1_4eed_93aa_95e5ffe1ea43.slice/crio-55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b WatchSource:0}: Error finding container 55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b: Status 404 returned error can't find the container with id 55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.454854 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-42da-account-create-update-dmqrb"] Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.467149 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hchq8"] Feb 02 10:51:28 crc kubenswrapper[4845]: W0202 10:51:28.477693 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05069a45_f3d6_43e9_bf29_2e3a3cbcc2d8.slice/crio-9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe WatchSource:0}: Error finding container 9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe: Status 404 returned error can't find the container with id 9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.997298 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"53989098-3602-4958-96b3-ca7c539c29c9","Type":"ContainerStarted","Data":"6ec8de5a3c76a8cb59a7c822a25eb9c56d22cb083abcbfa3c4d66b41f0e8f30e"} Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.997366 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"53989098-3602-4958-96b3-ca7c539c29c9","Type":"ContainerStarted","Data":"e866c831e8a5fb4477a08a67dda617fad70da77f2dc1c392e2e10efc0f5560a9"} Feb 02 10:51:28 crc kubenswrapper[4845]: I0202 10:51:28.999806 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.001871 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hchq8" event={"ID":"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8","Type":"ContainerStarted","Data":"2b3f2d6cbc2fbaafd3e7acd158c2d8862a6fa7d667477c7485ce11aa580584b6"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.001920 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hchq8" event={"ID":"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8","Type":"ContainerStarted","Data":"9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.005864 4845 generic.go:334] "Generic (PLEG): container finished" podID="802ba94f-17f1-4eed-93aa-95e5ffe1ea43" containerID="f34968fe05a8bf94342bdc3d85ae1b9aa88e7cf9dc5bd5dd49c6ff1a1947185f" exitCode=0 Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.006047 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d20-account-create-update-8zrt2" event={"ID":"802ba94f-17f1-4eed-93aa-95e5ffe1ea43","Type":"ContainerDied","Data":"f34968fe05a8bf94342bdc3d85ae1b9aa88e7cf9dc5bd5dd49c6ff1a1947185f"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.006149 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d20-account-create-update-8zrt2" 
event={"ID":"802ba94f-17f1-4eed-93aa-95e5ffe1ea43","Type":"ContainerStarted","Data":"55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.011300 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-42da-account-create-update-dmqrb" event={"ID":"1f4db3a3-fdab-41f0-b675-26aaaa575769","Type":"ContainerStarted","Data":"a6fc1e9766e80a7882547aca33a905323116744093fc66e9ca25989843c77b7a"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.011360 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-42da-account-create-update-dmqrb" event={"ID":"1f4db3a3-fdab-41f0-b675-26aaaa575769","Type":"ContainerStarted","Data":"91e748b8a84823fc9d68b855d19651a147634d90eb6ab4b1022d6097d05ec54c"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.016451 4845 generic.go:334] "Generic (PLEG): container finished" podID="82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" containerID="4440ff2b707ea7d06429dcadf4f2755be4ff5fe9e3d35fbb9b3d5449440ebcdb" exitCode=0 Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.016786 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q8crj" event={"ID":"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e","Type":"ContainerDied","Data":"4440ff2b707ea7d06429dcadf4f2755be4ff5fe9e3d35fbb9b3d5449440ebcdb"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.016829 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q8crj" event={"ID":"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e","Type":"ContainerStarted","Data":"73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035"} Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.034106 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.355527538 podStartE2EDuration="5.034082963s" podCreationTimestamp="2026-02-02 10:51:24 +0000 UTC" 
firstStartedPulling="2026-02-02 10:51:25.938353064 +0000 UTC m=+1167.029754514" lastFinishedPulling="2026-02-02 10:51:27.616908479 +0000 UTC m=+1168.708309939" observedRunningTime="2026-02-02 10:51:29.025366874 +0000 UTC m=+1170.116768324" watchObservedRunningTime="2026-02-02 10:51:29.034082963 +0000 UTC m=+1170.125484413" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.046824 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-42da-account-create-update-dmqrb" podStartSLOduration=2.046804556 podStartE2EDuration="2.046804556s" podCreationTimestamp="2026-02-02 10:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:29.040011772 +0000 UTC m=+1170.131413222" watchObservedRunningTime="2026-02-02 10:51:29.046804556 +0000 UTC m=+1170.138206006" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.059286 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hchq8" podStartSLOduration=2.059265672 podStartE2EDuration="2.059265672s" podCreationTimestamp="2026-02-02 10:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:29.055852255 +0000 UTC m=+1170.147253705" watchObservedRunningTime="2026-02-02 10:51:29.059265672 +0000 UTC m=+1170.150667122" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.295938 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-wlplx"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.303375 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.313625 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-wlplx"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.348826 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9z2\" (UniqueName: \"kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.348998 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.386577 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.421147 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.426850 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.451101 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.451129 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.451286 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9z2\" (UniqueName: \"kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.452524 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.499191 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-2fba-account-create-update-57wqb"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.500966 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.505156 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.517580 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9z2\" (UniqueName: \"kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2\") pod \"mysqld-exporter-openstack-db-create-wlplx\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.562662 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2fba-account-create-update-57wqb"] Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.563715 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.563763 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvmd\" (UniqueName: \"kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.563841 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.563949 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.563981 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.564040 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.564073 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllkd\" (UniqueName: \"kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.625598 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllkd\" (UniqueName: \"kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672120 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672146 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvmd\" (UniqueName: \"kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672221 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672284 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb\") pod 
\"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672312 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.672354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.673230 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.673840 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.674275 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " 
pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.690139 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.691426 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.723495 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvmd\" (UniqueName: \"kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd\") pod \"mysqld-exporter-2fba-account-create-update-57wqb\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.728627 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllkd\" (UniqueName: \"kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd\") pod \"dnsmasq-dns-698758b865-7lh98\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.762012 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.775301 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db65c5b8-1fc4-43f9-89bd-51ebb710eccd" path="/var/lib/kubelet/pods/db65c5b8-1fc4-43f9-89bd-51ebb710eccd/volumes" Feb 02 10:51:29 crc kubenswrapper[4845]: I0202 10:51:29.900713 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.039088 4845 generic.go:334] "Generic (PLEG): container finished" podID="05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" containerID="2b3f2d6cbc2fbaafd3e7acd158c2d8862a6fa7d667477c7485ce11aa580584b6" exitCode=0 Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.039168 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hchq8" event={"ID":"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8","Type":"ContainerDied","Data":"2b3f2d6cbc2fbaafd3e7acd158c2d8862a6fa7d667477c7485ce11aa580584b6"} Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.069663 4845 generic.go:334] "Generic (PLEG): container finished" podID="1f4db3a3-fdab-41f0-b675-26aaaa575769" containerID="a6fc1e9766e80a7882547aca33a905323116744093fc66e9ca25989843c77b7a" exitCode=0 Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.070048 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="dnsmasq-dns" containerID="cri-o://807d2bbccada01aa21068396fdc8f2bcadc102372f4ecccdc24ca17e629e8a36" gracePeriod=10 Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.070036 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-42da-account-create-update-dmqrb" 
event={"ID":"1f4db3a3-fdab-41f0-b675-26aaaa575769","Type":"ContainerDied","Data":"a6fc1e9766e80a7882547aca33a905323116744093fc66e9ca25989843c77b7a"} Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.151767 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.305580 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.564190 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.571328 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.573577 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.573791 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.579945 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jqmgx" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.580182 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.584064 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703362 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6db6e42-984a-484b-9f90-e6efa9817f37-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " 
pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703441 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwnk\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-kube-api-access-cvwnk\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703484 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703568 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-lock\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703605 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-cache\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.703809 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 
10:51:30.807402 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvwnk\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-kube-api-access-cvwnk\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.807672 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.807874 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-lock\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.807956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-cache\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.808042 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.808083 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6db6e42-984a-484b-9f90-e6efa9817f37-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: E0202 10:51:30.811429 4845 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:30 crc kubenswrapper[4845]: E0202 10:51:30.811458 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:30 crc kubenswrapper[4845]: E0202 10:51:30.811506 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. No retries permitted until 2026-02-02 10:51:31.311485491 +0000 UTC m=+1172.402886991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.811692 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-cache\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.811714 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d6db6e42-984a-484b-9f90-e6efa9817f37-lock\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.816614 4845 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.816653 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/889f87c871906604b344aa5a2d9d655cfec3434c73aede0f647c6d1c9bfbfe68/globalmount\"" pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.816992 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6db6e42-984a-484b-9f90-e6efa9817f37-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.826735 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvwnk\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-kube-api-access-cvwnk\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.827871 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-wlplx"] Feb 02 10:51:30 crc kubenswrapper[4845]: W0202 10:51:30.828454 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d4f7cb3_0991_4ce6_a69d_fd6f17bbc2fc.slice/crio-2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08 WatchSource:0}: Error finding container 2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08: Status 404 
returned error can't find the container with id 2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08 Feb 02 10:51:30 crc kubenswrapper[4845]: I0202 10:51:30.877299 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aeab7da0-b130-4af5-8fde-0e9836ac2c44\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.089036 4845 generic.go:334] "Generic (PLEG): container finished" podID="47456531-c404-4086-89b2-d159d71fdeb1" containerID="807d2bbccada01aa21068396fdc8f2bcadc102372f4ecccdc24ca17e629e8a36" exitCode=0 Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.089735 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" event={"ID":"47456531-c404-4086-89b2-d159d71fdeb1","Type":"ContainerDied","Data":"807d2bbccada01aa21068396fdc8f2bcadc102372f4ecccdc24ca17e629e8a36"} Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.092186 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d20-account-create-update-8zrt2" event={"ID":"802ba94f-17f1-4eed-93aa-95e5ffe1ea43","Type":"ContainerDied","Data":"55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b"} Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.092226 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a87cbf22b74d58f38660dbd860ecac7e0253e0080661b0a82322c06749f43b" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.104972 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" event={"ID":"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc","Type":"ContainerStarted","Data":"2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08"} Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 
10:51:31.108721 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:51:31 crc kubenswrapper[4845]: W0202 10:51:31.118002 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc354af6_cf06_4532_83c7_845e6f8f41c5.slice/crio-2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11 WatchSource:0}: Error finding container 2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11: Status 404 returned error can't find the container with id 2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11 Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.121951 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2fba-account-create-update-57wqb"] Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.122189 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q8crj" event={"ID":"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e","Type":"ContainerDied","Data":"73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035"} Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.122226 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73664e3bb52176d824352579e8180b74b675bbf73a6874493edb31b304c82035" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.169064 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.181795 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.340353 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzb5t\" (UniqueName: \"kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t\") pod \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.340451 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5cd5\" (UniqueName: \"kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5\") pod \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.340470 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts\") pod \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\" (UID: \"802ba94f-17f1-4eed-93aa-95e5ffe1ea43\") " Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.340545 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts\") pod \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\" (UID: \"82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e\") " Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.340980 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:31 crc kubenswrapper[4845]: E0202 10:51:31.341229 4845 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:31 crc kubenswrapper[4845]: E0202 10:51:31.341256 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:31 crc kubenswrapper[4845]: E0202 10:51:31.341307 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. No retries permitted until 2026-02-02 10:51:32.341288799 +0000 UTC m=+1173.432690249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.343747 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "802ba94f-17f1-4eed-93aa-95e5ffe1ea43" (UID: "802ba94f-17f1-4eed-93aa-95e5ffe1ea43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.344000 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" (UID: "82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.364571 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t" (OuterVolumeSpecName: "kube-api-access-jzb5t") pod "82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" (UID: "82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e"). InnerVolumeSpecName "kube-api-access-jzb5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.371731 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5" (OuterVolumeSpecName: "kube-api-access-k5cd5") pod "802ba94f-17f1-4eed-93aa-95e5ffe1ea43" (UID: "802ba94f-17f1-4eed-93aa-95e5ffe1ea43"). InnerVolumeSpecName "kube-api-access-k5cd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.443467 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzb5t\" (UniqueName: \"kubernetes.io/projected/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-kube-api-access-jzb5t\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.443505 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.443519 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5cd5\" (UniqueName: \"kubernetes.io/projected/802ba94f-17f1-4eed-93aa-95e5ffe1ea43-kube-api-access-k5cd5\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:31 crc kubenswrapper[4845]: I0202 10:51:31.443530 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.046201 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hchq8" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.065636 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.091796 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.138049 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" event={"ID":"47456531-c404-4086-89b2-d159d71fdeb1","Type":"ContainerDied","Data":"51a381a3e9440b9cc201282058f7396573a6b9b5751fc506fe374daac0e1764f"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.138081 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gfr4t" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.138125 4845 scope.go:117] "RemoveContainer" containerID="807d2bbccada01aa21068396fdc8f2bcadc102372f4ecccdc24ca17e629e8a36" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.143384 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hchq8" event={"ID":"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8","Type":"ContainerDied","Data":"9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.143421 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb9c4d101e2d4087cd50fbee98f35575a3990350ca5373046dc6cec8a9f46fe" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.143478 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hchq8" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.158665 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-42da-account-create-update-dmqrb" event={"ID":"1f4db3a3-fdab-41f0-b675-26aaaa575769","Type":"ContainerDied","Data":"91e748b8a84823fc9d68b855d19651a147634d90eb6ab4b1022d6097d05ec54c"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.158704 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e748b8a84823fc9d68b855d19651a147634d90eb6ab4b1022d6097d05ec54c" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.158797 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-42da-account-create-update-dmqrb" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.160547 4845 generic.go:334] "Generic (PLEG): container finished" podID="8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" containerID="2206de97b65437900f5876967e6088126d1863f30884da3104dd45175b5b4a13" exitCode=0 Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.160593 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" event={"ID":"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc","Type":"ContainerDied","Data":"2206de97b65437900f5876967e6088126d1863f30884da3104dd45175b5b4a13"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.162555 4845 generic.go:334] "Generic (PLEG): container finished" podID="fc354af6-cf06-4532-83c7-845e6f8f41c5" containerID="a93a594f3ef1f5fe22327995e92493ed98d1dba81262b5bcf617c4c84d0e3aba" exitCode=0 Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.162612 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" event={"ID":"fc354af6-cf06-4532-83c7-845e6f8f41c5","Type":"ContainerDied","Data":"a93a594f3ef1f5fe22327995e92493ed98d1dba81262b5bcf617c4c84d0e3aba"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.162630 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" event={"ID":"fc354af6-cf06-4532-83c7-845e6f8f41c5","Type":"ContainerStarted","Data":"2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.163779 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6k7b\" (UniqueName: \"kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b\") pod \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " Feb 02 10:51:32 crc kubenswrapper[4845]: 
I0202 10:51:32.163870 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts\") pod \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\" (UID: \"05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.165343 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" (UID: "05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.170150 4845 generic.go:334] "Generic (PLEG): container finished" podID="125bfda8-e971-4249-8b07-0bbff61e4725" containerID="ddd6909bbdf16b8af6fed8c71335e6bc1892bf152856011ceb433a2a497011e2" exitCode=0 Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.170233 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q8crj" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.170571 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7lh98" event={"ID":"125bfda8-e971-4249-8b07-0bbff61e4725","Type":"ContainerDied","Data":"ddd6909bbdf16b8af6fed8c71335e6bc1892bf152856011ceb433a2a497011e2"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.170618 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7lh98" event={"ID":"125bfda8-e971-4249-8b07-0bbff61e4725","Type":"ContainerStarted","Data":"d8710db5f1971bcb1ada6e2682b3528a8c529ad636b2e603fac42dddaaffa6b0"} Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.170668 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6d20-account-create-update-8zrt2" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.173196 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b" (OuterVolumeSpecName: "kube-api-access-t6k7b") pod "05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" (UID: "05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8"). InnerVolumeSpecName "kube-api-access-t6k7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.269523 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config\") pod \"47456531-c404-4086-89b2-d159d71fdeb1\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.269633 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297q5\" (UniqueName: \"kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5\") pod \"47456531-c404-4086-89b2-d159d71fdeb1\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.269667 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc\") pod \"47456531-c404-4086-89b2-d159d71fdeb1\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.269711 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz65m\" (UniqueName: \"kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m\") pod \"1f4db3a3-fdab-41f0-b675-26aaaa575769\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " Feb 02 10:51:32 crc 
kubenswrapper[4845]: I0202 10:51:32.269828 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts\") pod \"1f4db3a3-fdab-41f0-b675-26aaaa575769\" (UID: \"1f4db3a3-fdab-41f0-b675-26aaaa575769\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.269854 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb\") pod \"47456531-c404-4086-89b2-d159d71fdeb1\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.270505 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb\") pod \"47456531-c404-4086-89b2-d159d71fdeb1\" (UID: \"47456531-c404-4086-89b2-d159d71fdeb1\") " Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.271151 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.271173 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6k7b\" (UniqueName: \"kubernetes.io/projected/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8-kube-api-access-t6k7b\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.272546 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f4db3a3-fdab-41f0-b675-26aaaa575769" (UID: "1f4db3a3-fdab-41f0-b675-26aaaa575769"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.276773 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m" (OuterVolumeSpecName: "kube-api-access-vz65m") pod "1f4db3a3-fdab-41f0-b675-26aaaa575769" (UID: "1f4db3a3-fdab-41f0-b675-26aaaa575769"). InnerVolumeSpecName "kube-api-access-vz65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.282297 4845 scope.go:117] "RemoveContainer" containerID="30b7a3b349a6bebefe667d17eaae2b3cbdc973ed20f342ad0701818a0d0cb4b7" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.295132 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5" (OuterVolumeSpecName: "kube-api-access-297q5") pod "47456531-c404-4086-89b2-d159d71fdeb1" (UID: "47456531-c404-4086-89b2-d159d71fdeb1"). InnerVolumeSpecName "kube-api-access-297q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.416218 4845 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.416276 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.416373 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. No retries permitted until 2026-02-02 10:51:34.416339664 +0000 UTC m=+1175.507741124 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.412094 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.447335 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297q5\" (UniqueName: \"kubernetes.io/projected/47456531-c404-4086-89b2-d159d71fdeb1-kube-api-access-297q5\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.447377 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz65m\" (UniqueName: \"kubernetes.io/projected/1f4db3a3-fdab-41f0-b675-26aaaa575769-kube-api-access-vz65m\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.447406 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f4db3a3-fdab-41f0-b675-26aaaa575769-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.455399 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47456531-c404-4086-89b2-d159d71fdeb1" (UID: "47456531-c404-4086-89b2-d159d71fdeb1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.500427 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47456531-c404-4086-89b2-d159d71fdeb1" (UID: "47456531-c404-4086-89b2-d159d71fdeb1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.501005 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config" (OuterVolumeSpecName: "config") pod "47456531-c404-4086-89b2-d159d71fdeb1" (UID: "47456531-c404-4086-89b2-d159d71fdeb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.540540 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47456531-c404-4086-89b2-d159d71fdeb1" (UID: "47456531-c404-4086-89b2-d159d71fdeb1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.553812 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.553846 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.553872 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.553893 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47456531-c404-4086-89b2-d159d71fdeb1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.725154 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4gj6v"] Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.725952 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.725975 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.726002 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802ba94f-17f1-4eed-93aa-95e5ffe1ea43" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726011 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="802ba94f-17f1-4eed-93aa-95e5ffe1ea43" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.726036 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="init" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726045 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="init" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.726060 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="dnsmasq-dns" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726071 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="dnsmasq-dns" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.726086 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726095 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: E0202 10:51:32.726116 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4db3a3-fdab-41f0-b675-26aaaa575769" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726125 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4db3a3-fdab-41f0-b675-26aaaa575769" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726396 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726423 4845 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1f4db3a3-fdab-41f0-b675-26aaaa575769" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726439 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="47456531-c404-4086-89b2-d159d71fdeb1" containerName="dnsmasq-dns" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726476 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" containerName="mariadb-database-create" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.726493 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="802ba94f-17f1-4eed-93aa-95e5ffe1ea43" containerName="mariadb-account-create-update" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.727419 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.741819 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4gj6v"] Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.788443 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1e2c-account-create-update-jxgpc"] Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.790128 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.792758 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.802584 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1e2c-account-create-update-jxgpc"] Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.850115 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"] Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.863148 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwsf\" (UniqueName: \"kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.863335 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxqp\" (UniqueName: \"kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp\") pod \"glance-db-create-4gj6v\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.863592 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts\") pod \"glance-db-create-4gj6v\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.863643 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.871803 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gfr4t"] Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.965707 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts\") pod \"glance-db-create-4gj6v\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.965797 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.965876 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwsf\" (UniqueName: \"kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.965980 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxqp\" (UniqueName: \"kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp\") pod \"glance-db-create-4gj6v\" (UID: 
\"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.966612 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.967242 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts\") pod \"glance-db-create-4gj6v\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.983824 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxqp\" (UniqueName: \"kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp\") pod \"glance-db-create-4gj6v\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:32 crc kubenswrapper[4845]: I0202 10:51:32.986583 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwsf\" (UniqueName: \"kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf\") pod \"glance-1e2c-account-create-update-jxgpc\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.110359 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.121792 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.188664 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7lh98" event={"ID":"125bfda8-e971-4249-8b07-0bbff61e4725","Type":"ContainerStarted","Data":"7b934d9dc539ae78cc75a18c8a04ead9cef0ee49e2411933655a99884f524af5"} Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.189916 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.690123 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.710476 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7lh98" podStartSLOduration=4.710456273 podStartE2EDuration="4.710456273s" podCreationTimestamp="2026-02-02 10:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:33.214100851 +0000 UTC m=+1174.305502301" watchObservedRunningTime="2026-02-02 10:51:33.710456273 +0000 UTC m=+1174.801857733" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.733484 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47456531-c404-4086-89b2-d159d71fdeb1" path="/var/lib/kubelet/pods/47456531-c404-4086-89b2-d159d71fdeb1/volumes" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.785313 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts\") pod \"fc354af6-cf06-4532-83c7-845e6f8f41c5\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " Feb 02 10:51:33 crc 
kubenswrapper[4845]: I0202 10:51:33.785434 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvmd\" (UniqueName: \"kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd\") pod \"fc354af6-cf06-4532-83c7-845e6f8f41c5\" (UID: \"fc354af6-cf06-4532-83c7-845e6f8f41c5\") " Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.787190 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc354af6-cf06-4532-83c7-845e6f8f41c5" (UID: "fc354af6-cf06-4532-83c7-845e6f8f41c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.793112 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd" (OuterVolumeSpecName: "kube-api-access-lkvmd") pod "fc354af6-cf06-4532-83c7-845e6f8f41c5" (UID: "fc354af6-cf06-4532-83c7-845e6f8f41c5"). InnerVolumeSpecName "kube-api-access-lkvmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.828090 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4gj6v"] Feb 02 10:51:33 crc kubenswrapper[4845]: W0202 10:51:33.833387 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa851884_d67b_4c70_8ad6_9dcf92001aa5.slice/crio-7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0 WatchSource:0}: Error finding container 7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0: Status 404 returned error can't find the container with id 7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0 Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.888375 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc354af6-cf06-4532-83c7-845e6f8f41c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.888418 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvmd\" (UniqueName: \"kubernetes.io/projected/fc354af6-cf06-4532-83c7-845e6f8f41c5-kube-api-access-lkvmd\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:33 crc kubenswrapper[4845]: I0202 10:51:33.985847 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1e2c-account-create-update-jxgpc"] Feb 02 10:51:33 crc kubenswrapper[4845]: W0202 10:51:33.993498 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13911fd9_043e_424e_ba84_da6af616a202.slice/crio-9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5 WatchSource:0}: Error finding container 9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5: Status 404 returned error can't find the container with id 
9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5 Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.014764 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.091554 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts\") pod \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.091998 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9z2\" (UniqueName: \"kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2\") pod \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\" (UID: \"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc\") " Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.092421 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" (UID: "8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.093051 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.101913 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2" (OuterVolumeSpecName: "kube-api-access-5c9z2") pod "8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" (UID: "8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc"). InnerVolumeSpecName "kube-api-access-5c9z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.195573 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9z2\" (UniqueName: \"kubernetes.io/projected/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc-kube-api-access-5c9z2\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.214744 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1e2c-account-create-update-jxgpc" event={"ID":"13911fd9-043e-424e-ba84-da6af616a202","Type":"ContainerStarted","Data":"3459391ca6853a62f9d027449b7b8a2cf779254301205a614a5d854038e87879"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.214795 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1e2c-account-create-update-jxgpc" event={"ID":"13911fd9-043e-424e-ba84-da6af616a202","Type":"ContainerStarted","Data":"9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.217280 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" 
event={"ID":"8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc","Type":"ContainerDied","Data":"2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.217500 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a68ae9cc8e3b067fc0a7eef64277e6ebec856643c581c42297e123e19818a08" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.217294 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-wlplx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.232524 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" event={"ID":"fc354af6-cf06-4532-83c7-845e6f8f41c5","Type":"ContainerDied","Data":"2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.232592 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a5d308418e3df9d85a25d801311a5f584b57fecacd8d2dc49cfb886adff6b11" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.232713 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-2fba-account-create-update-57wqb" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.241752 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4gj6v" event={"ID":"aa851884-d67b-4c70-8ad6-9dcf92001aa5","Type":"ContainerStarted","Data":"c7c52a689d78c280fc8b76cead8491d0d675cc8f51ac92d599f8ca929b26e719"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.241827 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4gj6v" event={"ID":"aa851884-d67b-4c70-8ad6-9dcf92001aa5","Type":"ContainerStarted","Data":"7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0"} Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.265099 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1e2c-account-create-update-jxgpc" podStartSLOduration=2.265079389 podStartE2EDuration="2.265079389s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:34.229276287 +0000 UTC m=+1175.320677737" watchObservedRunningTime="2026-02-02 10:51:34.265079389 +0000 UTC m=+1175.356480839" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.285077 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-4gj6v" podStartSLOduration=2.285056079 podStartE2EDuration="2.285056079s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:34.279291905 +0000 UTC m=+1175.370693355" watchObservedRunningTime="2026-02-02 10:51:34.285056079 +0000 UTC m=+1175.376457529" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.382518 4845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-bbgdx"] Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.383050 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" containerName="mariadb-database-create" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.383075 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" containerName="mariadb-database-create" Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.383094 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc354af6-cf06-4532-83c7-845e6f8f41c5" containerName="mariadb-account-create-update" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.383104 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc354af6-cf06-4532-83c7-845e6f8f41c5" containerName="mariadb-account-create-update" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.383386 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc354af6-cf06-4532-83c7-845e6f8f41c5" containerName="mariadb-account-create-update" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.383414 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" containerName="mariadb-database-create" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.384775 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.386314 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.401972 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bbgdx"] Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.454759 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gln2q"] Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.456100 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.459175 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.459354 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.459478 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507549 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507618 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvsgj\" (UniqueName: \"kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj\") pod \"swift-ring-rebalance-gln2q\" (UID: 
\"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507646 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqv8p\" (UniqueName: \"kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507678 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507722 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507737 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507770 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle\") pod \"swift-ring-rebalance-gln2q\" (UID: 
\"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507830 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507857 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.507874 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.508053 4845 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.508068 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.508133 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:51:38.508116748 +0000 UTC m=+1179.599518198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.508379 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gln2q"] Feb 02 10:51:34 crc kubenswrapper[4845]: E0202 10:51:34.517408 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dvsgj ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-gln2q" podUID="353b5053-2393-4c95-9800-fc96032fe017" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.555010 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fwkp8"] Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.562675 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.575484 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gln2q"] Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.586649 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fwkp8"] Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.609777 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.609847 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.609873 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.609931 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.609993 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvsgj\" (UniqueName: \"kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.610018 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqv8p\" (UniqueName: \"kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.610072 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.610086 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.610123 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.612348 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.612356 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.612734 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.613051 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.615582 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.615615 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.615770 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.631296 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvsgj\" (UniqueName: \"kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj\") pod \"swift-ring-rebalance-gln2q\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.634719 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqv8p\" (UniqueName: \"kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p\") pod \"root-account-create-update-bbgdx\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.711826 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.711945 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.711994 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29f4\" (UniqueName: \"kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.712111 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.712166 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.712236 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.712310 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.777392 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.814715 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.815587 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.815821 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.816168 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.815878 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.816289 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.816843 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.817163 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.817272 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.817318 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29f4\" (UniqueName: 
\"kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.822827 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.822979 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.827923 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:34 crc kubenswrapper[4845]: I0202 10:51:34.836721 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29f4\" (UniqueName: \"kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4\") pod \"swift-ring-rebalance-fwkp8\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.061283 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.278970 4845 generic.go:334] "Generic (PLEG): container finished" podID="aa851884-d67b-4c70-8ad6-9dcf92001aa5" containerID="c7c52a689d78c280fc8b76cead8491d0d675cc8f51ac92d599f8ca929b26e719" exitCode=0 Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.279774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4gj6v" event={"ID":"aa851884-d67b-4c70-8ad6-9dcf92001aa5","Type":"ContainerDied","Data":"c7c52a689d78c280fc8b76cead8491d0d675cc8f51ac92d599f8ca929b26e719"} Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.289595 4845 generic.go:334] "Generic (PLEG): container finished" podID="13911fd9-043e-424e-ba84-da6af616a202" containerID="3459391ca6853a62f9d027449b7b8a2cf779254301205a614a5d854038e87879" exitCode=0 Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.289700 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.290522 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1e2c-account-create-update-jxgpc" event={"ID":"13911fd9-043e-424e-ba84-da6af616a202","Type":"ContainerDied","Data":"3459391ca6853a62f9d027449b7b8a2cf779254301205a614a5d854038e87879"} Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.302116 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:35 crc kubenswrapper[4845]: W0202 10:51:35.374893 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod340eb818_166a_42f4_a562_8ffa18018118.slice/crio-3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853 WatchSource:0}: Error finding container 3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853: Status 404 returned error can't find the container with id 3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853 Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.380731 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bbgdx"] Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430323 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430405 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430440 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430473 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430522 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430660 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430722 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvsgj\" (UniqueName: \"kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj\") pod \"353b5053-2393-4c95-9800-fc96032fe017\" (UID: \"353b5053-2393-4c95-9800-fc96032fe017\") " Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.430836 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.431425 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts" (OuterVolumeSpecName: "scripts") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.431764 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.432866 4845 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.432906 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b5053-2393-4c95-9800-fc96032fe017-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.432918 4845 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/353b5053-2393-4c95-9800-fc96032fe017-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.437012 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: 
"353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.437537 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj" (OuterVolumeSpecName: "kube-api-access-dvsgj") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "kube-api-access-dvsgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.437776 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.438757 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "353b5053-2393-4c95-9800-fc96032fe017" (UID: "353b5053-2393-4c95-9800-fc96032fe017"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.536214 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvsgj\" (UniqueName: \"kubernetes.io/projected/353b5053-2393-4c95-9800-fc96032fe017-kube-api-access-dvsgj\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.536243 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.536254 4845 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.536264 4845 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/353b5053-2393-4c95-9800-fc96032fe017-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.778638 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fwkp8"] Feb 02 10:51:35 crc kubenswrapper[4845]: I0202 10:51:35.849764 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58d87f97d7-w9v5x" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerName="console" containerID="cri-o://111832ec0e0c78c364956282064b05b1c4c2ce296de61e9f1fa4fa6702a3f91d" gracePeriod=15 Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.301198 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d87f97d7-w9v5x_c26d4007-db0b-4379-8431-d6e43dec7e9f/console/0.log" Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.301243 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerID="111832ec0e0c78c364956282064b05b1c4c2ce296de61e9f1fa4fa6702a3f91d" exitCode=2 Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.301331 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d87f97d7-w9v5x" event={"ID":"c26d4007-db0b-4379-8431-d6e43dec7e9f","Type":"ContainerDied","Data":"111832ec0e0c78c364956282064b05b1c4c2ce296de61e9f1fa4fa6702a3f91d"} Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.304544 4845 generic.go:334] "Generic (PLEG): container finished" podID="340eb818-166a-42f4-a562-8ffa18018118" containerID="9249fd2f5427ff24d36b33c0b16d83ec165b6227454b663781f59d842989c2da" exitCode=0 Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.304617 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bbgdx" event={"ID":"340eb818-166a-42f4-a562-8ffa18018118","Type":"ContainerDied","Data":"9249fd2f5427ff24d36b33c0b16d83ec165b6227454b663781f59d842989c2da"} Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.304650 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bbgdx" event={"ID":"340eb818-166a-42f4-a562-8ffa18018118","Type":"ContainerStarted","Data":"3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853"} Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.304670 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gln2q" Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.419737 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gln2q"] Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.444065 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gln2q"] Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.636510 4845 patch_prober.go:28] interesting pod/console-58d87f97d7-w9v5x container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/health\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 02 10:51:36 crc kubenswrapper[4845]: I0202 10:51:36.636814 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-58d87f97d7-w9v5x" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.88:8443/health\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.315814 4845 generic.go:334] "Generic (PLEG): container finished" podID="2e45ad6a-20f4-4da2-82b7-500ed29a0cd5" containerID="b3ca7a9a30c5133d142f8240cf49d184ac360dc0f74e089c87c96d3a92c7d96d" exitCode=0 Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.315870 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5","Type":"ContainerDied","Data":"b3ca7a9a30c5133d142f8240cf49d184ac360dc0f74e089c87c96d3a92c7d96d"} Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.318357 4845 generic.go:334] "Generic (PLEG): container finished" podID="70739f91-4fde-4bc2-b4e1-5bdb7cb0426c" containerID="16a87dbd784c474f3d4b29bfb2f9739515e8d502ec347cc5ddd63af0721bc8af" exitCode=0 Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.318396 4845 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c","Type":"ContainerDied","Data":"16a87dbd784c474f3d4b29bfb2f9739515e8d502ec347cc5ddd63af0721bc8af"} Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.321972 4845 generic.go:334] "Generic (PLEG): container finished" podID="a61fa08e-868a-4415-88d5-7ed0eebbeb45" containerID="687c373278d72892bc7af53f43d957e03afb3a8da90e855595ea0df53482a4a4" exitCode=0 Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.322092 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a61fa08e-868a-4415-88d5-7ed0eebbeb45","Type":"ContainerDied","Data":"687c373278d72892bc7af53f43d957e03afb3a8da90e855595ea0df53482a4a4"} Feb 02 10:51:37 crc kubenswrapper[4845]: I0202 10:51:37.726862 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353b5053-2393-4c95-9800-fc96032fe017" path="/var/lib/kubelet/pods/353b5053-2393-4c95-9800-fc96032fe017/volumes" Feb 02 10:51:38 crc kubenswrapper[4845]: W0202 10:51:38.316360 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbaf357_af6c_46b6_b6f0_de2b6e4ee44c.slice/crio-b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732 WatchSource:0}: Error finding container b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732: Status 404 returned error can't find the container with id b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732 Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.337154 4845 generic.go:334] "Generic (PLEG): container finished" podID="d0a3a285-364a-4df2-8a7c-947ff673f254" containerID="93c6ac3518c8da645fe78eb56c15a31adc12e1d3538e14b7359e676cb11918c9" exitCode=0 Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.337218 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"d0a3a285-364a-4df2-8a7c-947ff673f254","Type":"ContainerDied","Data":"93c6ac3518c8da645fe78eb56c15a31adc12e1d3538e14b7359e676cb11918c9"} Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.340977 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bbgdx" event={"ID":"340eb818-166a-42f4-a562-8ffa18018118","Type":"ContainerDied","Data":"3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853"} Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.341006 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2857e7e1d4b7826e96833417a57da6a98eed0c53437fe71531e7ded412b853" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.342379 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4gj6v" event={"ID":"aa851884-d67b-4c70-8ad6-9dcf92001aa5","Type":"ContainerDied","Data":"7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0"} Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.342398 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7585d99dc88ad40cca31f5ecdb033d0e22df31000a8f8f1c08b3e9e8a1e7d5d0" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.344764 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1e2c-account-create-update-jxgpc" event={"ID":"13911fd9-043e-424e-ba84-da6af616a202","Type":"ContainerDied","Data":"9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5"} Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.344909 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9493c3898fec2faa6595d94b43b866cde5372c5130166dddd2a82a77a44369f5" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.345904 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fwkp8" 
event={"ID":"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c","Type":"ContainerStarted","Data":"b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732"} Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.549388 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:38 crc kubenswrapper[4845]: E0202 10:51:38.549926 4845 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:38 crc kubenswrapper[4845]: E0202 10:51:38.550001 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:38 crc kubenswrapper[4845]: E0202 10:51:38.550094 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. No retries permitted until 2026-02-02 10:51:46.550071265 +0000 UTC m=+1187.641472715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.611351 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.680342 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.685224 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.756692 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqv8p\" (UniqueName: \"kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p\") pod \"340eb818-166a-42f4-a562-8ffa18018118\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.756837 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvxqp\" (UniqueName: \"kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp\") pod \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.756905 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts\") pod \"340eb818-166a-42f4-a562-8ffa18018118\" (UID: \"340eb818-166a-42f4-a562-8ffa18018118\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.757015 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwsf\" (UniqueName: \"kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf\") pod \"13911fd9-043e-424e-ba84-da6af616a202\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.757120 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts\") 
pod \"13911fd9-043e-424e-ba84-da6af616a202\" (UID: \"13911fd9-043e-424e-ba84-da6af616a202\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.757187 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts\") pod \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\" (UID: \"aa851884-d67b-4c70-8ad6-9dcf92001aa5\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.764352 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa851884-d67b-4c70-8ad6-9dcf92001aa5" (UID: "aa851884-d67b-4c70-8ad6-9dcf92001aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.765327 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p" (OuterVolumeSpecName: "kube-api-access-mqv8p") pod "340eb818-166a-42f4-a562-8ffa18018118" (UID: "340eb818-166a-42f4-a562-8ffa18018118"). InnerVolumeSpecName "kube-api-access-mqv8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.765991 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13911fd9-043e-424e-ba84-da6af616a202" (UID: "13911fd9-043e-424e-ba84-da6af616a202"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.766175 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "340eb818-166a-42f4-a562-8ffa18018118" (UID: "340eb818-166a-42f4-a562-8ffa18018118"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.774207 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp" (OuterVolumeSpecName: "kube-api-access-zvxqp") pod "aa851884-d67b-4c70-8ad6-9dcf92001aa5" (UID: "aa851884-d67b-4c70-8ad6-9dcf92001aa5"). InnerVolumeSpecName "kube-api-access-zvxqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.802593 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf" (OuterVolumeSpecName: "kube-api-access-7rwsf") pod "13911fd9-043e-424e-ba84-da6af616a202" (UID: "13911fd9-043e-424e-ba84-da6af616a202"). InnerVolumeSpecName "kube-api-access-7rwsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873370 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqv8p\" (UniqueName: \"kubernetes.io/projected/340eb818-166a-42f4-a562-8ffa18018118-kube-api-access-mqv8p\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873401 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvxqp\" (UniqueName: \"kubernetes.io/projected/aa851884-d67b-4c70-8ad6-9dcf92001aa5-kube-api-access-zvxqp\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873410 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/340eb818-166a-42f4-a562-8ffa18018118-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873420 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwsf\" (UniqueName: \"kubernetes.io/projected/13911fd9-043e-424e-ba84-da6af616a202-kube-api-access-7rwsf\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873428 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13911fd9-043e-424e-ba84-da6af616a202-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.873437 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa851884-d67b-4c70-8ad6-9dcf92001aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.883962 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d87f97d7-w9v5x_c26d4007-db0b-4379-8431-d6e43dec7e9f/console/0.log" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 
10:51:38.884048 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.975310 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.975592 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.975714 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.975851 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.975971 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6kt\" (UniqueName: \"kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 
10:51:38.976963 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.977113 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert\") pod \"c26d4007-db0b-4379-8431-d6e43dec7e9f\" (UID: \"c26d4007-db0b-4379-8431-d6e43dec7e9f\") " Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.978681 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.979244 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca" (OuterVolumeSpecName: "service-ca") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.979562 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config" (OuterVolumeSpecName: "console-config") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.984314 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.989709 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.991051 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt" (OuterVolumeSpecName: "kube-api-access-fp6kt") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "kube-api-access-fp6kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:38 crc kubenswrapper[4845]: I0202 10:51:38.991206 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c26d4007-db0b-4379-8431-d6e43dec7e9f" (UID: "c26d4007-db0b-4379-8431-d6e43dec7e9f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.080688 4845 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081077 4845 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081092 4845 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081106 4845 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081120 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp6kt\" (UniqueName: \"kubernetes.io/projected/c26d4007-db0b-4379-8431-d6e43dec7e9f-kube-api-access-fp6kt\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081133 4845 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.081145 4845 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c26d4007-db0b-4379-8431-d6e43dec7e9f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:39 crc 
kubenswrapper[4845]: I0202 10:51:39.358278 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerStarted","Data":"953536ca819321a406c08f163b4b6ff072898362e02b3adbe579d686cddbce8e"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.365415 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d0a3a285-364a-4df2-8a7c-947ff673f254","Type":"ContainerStarted","Data":"17bfdabfa77ecf7194164037270605b523d4995460e6380d856305c1f6c0057d"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.365675 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.369604 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"a61fa08e-868a-4415-88d5-7ed0eebbeb45","Type":"ContainerStarted","Data":"1350f0f44f8691a3e3bbd0753dc0b0d45e8adf60a35bc0be667a6517ed1450e4"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.370588 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.384509 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e45ad6a-20f4-4da2-82b7-500ed29a0cd5","Type":"ContainerStarted","Data":"df55539d2d0f8d9c780d83c6d45d1c47cd8164e3c370d8ffca6b1ef3a6cabb0b"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.384961 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.388485 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d87f97d7-w9v5x_c26d4007-db0b-4379-8431-d6e43dec7e9f/console/0.log" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.388609 4845 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d87f97d7-w9v5x" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.388601 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d87f97d7-w9v5x" event={"ID":"c26d4007-db0b-4379-8431-d6e43dec7e9f","Type":"ContainerDied","Data":"b03473270e278212cfa587bd83c780a50bc52c823bf36377b8f5efc441c8224f"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.388792 4845 scope.go:117] "RemoveContainer" containerID="111832ec0e0c78c364956282064b05b1c4c2ce296de61e9f1fa4fa6702a3f91d" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.395115 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bbgdx" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.395171 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"70739f91-4fde-4bc2-b4e1-5bdb7cb0426c","Type":"ContainerStarted","Data":"091da937f8a68216ad0de07a10e107852949dbe38d8a323313bf100aa1da6145"} Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.395421 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4gj6v" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.395673 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1e2c-account-create-update-jxgpc" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.423879 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.302130834 podStartE2EDuration="57.423851914s" podCreationTimestamp="2026-02-02 10:50:42 +0000 UTC" firstStartedPulling="2026-02-02 10:50:45.151792619 +0000 UTC m=+1126.243194069" lastFinishedPulling="2026-02-02 10:51:03.273513699 +0000 UTC m=+1144.364915149" observedRunningTime="2026-02-02 10:51:39.393456086 +0000 UTC m=+1180.484857536" watchObservedRunningTime="2026-02-02 10:51:39.423851914 +0000 UTC m=+1180.515253364" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.456279 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.228180102 podStartE2EDuration="57.456261769s" podCreationTimestamp="2026-02-02 10:50:42 +0000 UTC" firstStartedPulling="2026-02-02 10:50:45.015977969 +0000 UTC m=+1126.107379409" lastFinishedPulling="2026-02-02 10:51:03.244059626 +0000 UTC m=+1144.335461076" observedRunningTime="2026-02-02 10:51:39.43631688 +0000 UTC m=+1180.527718350" watchObservedRunningTime="2026-02-02 10:51:39.456261769 +0000 UTC m=+1180.547663219" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.481306 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.21157542 podStartE2EDuration="57.481285924s" podCreationTimestamp="2026-02-02 10:50:42 +0000 UTC" firstStartedPulling="2026-02-02 10:50:44.960470659 +0000 UTC m=+1126.051872109" lastFinishedPulling="2026-02-02 10:51:03.230181163 +0000 UTC m=+1144.321582613" observedRunningTime="2026-02-02 10:51:39.474976603 +0000 UTC m=+1180.566378053" watchObservedRunningTime="2026-02-02 10:51:39.481285924 +0000 UTC m=+1180.572687374" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.540288 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.547820 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-58d87f97d7-w9v5x"] Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.569832 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.592196747 podStartE2EDuration="56.569813911s" podCreationTimestamp="2026-02-02 10:50:43 +0000 UTC" firstStartedPulling="2026-02-02 10:50:45.26638873 +0000 UTC m=+1126.357790180" lastFinishedPulling="2026-02-02 10:51:03.244005894 +0000 UTC m=+1144.335407344" observedRunningTime="2026-02-02 10:51:39.567085573 +0000 UTC m=+1180.658487023" watchObservedRunningTime="2026-02-02 10:51:39.569813911 +0000 UTC m=+1180.661215361" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.742079 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" path="/var/lib/kubelet/pods/c26d4007-db0b-4379-8431-d6e43dec7e9f/volumes" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.770083 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.840129 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-z27qx"] Feb 02 10:51:39 crc kubenswrapper[4845]: E0202 10:51:39.840753 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerName="console" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.840779 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerName="console" Feb 02 10:51:39 crc kubenswrapper[4845]: E0202 10:51:39.840797 4845 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="aa851884-d67b-4c70-8ad6-9dcf92001aa5" containerName="mariadb-database-create" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.840804 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa851884-d67b-4c70-8ad6-9dcf92001aa5" containerName="mariadb-database-create" Feb 02 10:51:39 crc kubenswrapper[4845]: E0202 10:51:39.840822 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340eb818-166a-42f4-a562-8ffa18018118" containerName="mariadb-account-create-update" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.840828 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="340eb818-166a-42f4-a562-8ffa18018118" containerName="mariadb-account-create-update" Feb 02 10:51:39 crc kubenswrapper[4845]: E0202 10:51:39.840843 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13911fd9-043e-424e-ba84-da6af616a202" containerName="mariadb-account-create-update" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.840857 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="13911fd9-043e-424e-ba84-da6af616a202" containerName="mariadb-account-create-update" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.841056 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa851884-d67b-4c70-8ad6-9dcf92001aa5" containerName="mariadb-database-create" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.841071 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26d4007-db0b-4379-8431-d6e43dec7e9f" containerName="console" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.841085 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="13911fd9-043e-424e-ba84-da6af616a202" containerName="mariadb-account-create-update" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.841096 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="340eb818-166a-42f4-a562-8ffa18018118" containerName="mariadb-account-create-update" Feb 02 
10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.841757 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.892247 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.892471 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerName="dnsmasq-dns" containerID="cri-o://81b2eedcdbc73132319f670807bdc308fb485bc4c468651b7572510bbe0cf821" gracePeriod=10 Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.896590 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.896915 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftsvx\" (UniqueName: \"kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.906402 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-z27qx"] Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.999014 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:39 crc kubenswrapper[4845]: I0202 10:51:39.999113 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftsvx\" (UniqueName: \"kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.000162 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.025083 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftsvx\" (UniqueName: \"kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx\") pod \"mysqld-exporter-openstack-cell1-db-create-z27qx\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.150768 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c783-account-create-update-c8k62"] Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.152635 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.160242 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.162769 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.173755 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c783-account-create-update-c8k62"] Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.210221 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85r5h\" (UniqueName: \"kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.210405 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.312380 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " 
pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.312795 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85r5h\" (UniqueName: \"kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.313968 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.359192 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85r5h\" (UniqueName: \"kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h\") pod \"mysqld-exporter-c783-account-create-update-c8k62\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.425901 4845 generic.go:334] "Generic (PLEG): container finished" podID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerID="81b2eedcdbc73132319f670807bdc308fb485bc4c468651b7572510bbe0cf821" exitCode=0 Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.426897 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" event={"ID":"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e","Type":"ContainerDied","Data":"81b2eedcdbc73132319f670807bdc308fb485bc4c468651b7572510bbe0cf821"} Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.486482 
4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.899098 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-z27qx"] Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.913064 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bbgdx"] Feb 02 10:51:40 crc kubenswrapper[4845]: I0202 10:51:40.926773 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bbgdx"] Feb 02 10:51:41 crc kubenswrapper[4845]: I0202 10:51:41.738642 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340eb818-166a-42f4-a562-8ffa18018118" path="/var/lib/kubelet/pods/340eb818-166a-42f4-a562-8ffa18018118/volumes" Feb 02 10:51:41 crc kubenswrapper[4845]: I0202 10:51:41.892159 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.051427 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config\") pod \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.051661 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq6c6\" (UniqueName: \"kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6\") pod \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.051693 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc\") pod \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\" (UID: \"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e\") " Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.079149 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6" (OuterVolumeSpecName: "kube-api-access-gq6c6") pod "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" (UID: "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e"). InnerVolumeSpecName "kube-api-access-gq6c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.126668 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config" (OuterVolumeSpecName: "config") pod "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" (UID: "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.159194 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq6c6\" (UniqueName: \"kubernetes.io/projected/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-kube-api-access-gq6c6\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.159243 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.191774 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" (UID: "7d63dd57-08d9-4913-b1d3-36a9c8b5db2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.261628 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.451058 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" event={"ID":"7d63dd57-08d9-4913-b1d3-36a9c8b5db2e","Type":"ContainerDied","Data":"7586b993b7c8e47124f3e9c3b1a81c730d51fb7fd5b1683f7e203dc16fdb11b3"} Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.451596 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w59pq" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.452007 4845 scope.go:117] "RemoveContainer" containerID="81b2eedcdbc73132319f670807bdc308fb485bc4c468651b7572510bbe0cf821" Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.454918 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerStarted","Data":"a90515d7d5931387eadbfcaa9c4213064cf00c7a915781e7cf67147b71f9ba30"} Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.488944 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:51:42 crc kubenswrapper[4845]: I0202 10:51:42.499327 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w59pq"] Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.040561 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kgn95"] Feb 02 10:51:43 crc kubenswrapper[4845]: E0202 10:51:43.041063 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerName="dnsmasq-dns" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.041081 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerName="dnsmasq-dns" Feb 02 10:51:43 crc kubenswrapper[4845]: E0202 10:51:43.041126 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerName="init" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.041135 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" containerName="init" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.041396 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" 
containerName="dnsmasq-dns" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.042257 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.046429 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.046684 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-snsd2" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.052066 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kgn95"] Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.180116 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.180391 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.180501 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkc25\" (UniqueName: \"kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.180599 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.283004 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.283139 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.283170 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkc25\" (UniqueName: \"kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.283207 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.301828 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.301855 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.303255 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.309454 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkc25\" (UniqueName: \"kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25\") pod \"glance-db-sync-kgn95\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.353860 4845 scope.go:117] "RemoveContainer" containerID="932d8191c1cbfeca1c9bdbbf6bdc47b46318b7170046d467a566c55700027df8" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.368981 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kgn95" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.468573 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" event={"ID":"712e6155-a77e-4f9c-9d55-a6edab62e9a7","Type":"ContainerStarted","Data":"46c4e9ee220ea6b984a975979b0bcc36085734f0da5fd8ac38f3c9ab84f3d5ad"} Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.791356 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d63dd57-08d9-4913-b1d3-36a9c8b5db2e" path="/var/lib/kubelet/pods/7d63dd57-08d9-4913-b1d3-36a9c8b5db2e/volumes" Feb 02 10:51:43 crc kubenswrapper[4845]: I0202 10:51:43.925443 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c783-account-create-update-c8k62"] Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.085743 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kgn95"] Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.481244 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fwkp8" event={"ID":"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c","Type":"ContainerStarted","Data":"de36731024371d8d225acca93d7d017177a15b01d17e1764ca4da25552e5472e"} Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.482642 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" event={"ID":"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd","Type":"ContainerStarted","Data":"955f18c841a8c16fdc8a34bb8452d5379275418e606966b811bfd890a335150b"} Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.487694 4845 generic.go:334] "Generic (PLEG): container finished" podID="712e6155-a77e-4f9c-9d55-a6edab62e9a7" containerID="e2da395f32221226555ec9a36b4c70b9ebd84972cf5fd496af49cd196e7172a2" exitCode=0 Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.487767 4845 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" event={"ID":"712e6155-a77e-4f9c-9d55-a6edab62e9a7","Type":"ContainerDied","Data":"e2da395f32221226555ec9a36b4c70b9ebd84972cf5fd496af49cd196e7172a2"} Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.489079 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kgn95" event={"ID":"34877df4-b654-4e0c-ac67-da6fd95c249d","Type":"ContainerStarted","Data":"46b3e4f9d3e3e603b74f4066b84d76558af5b054c23ef7c7b9de906dee295d9c"} Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.510045 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fwkp8" podStartSLOduration=5.405812998 podStartE2EDuration="10.510022864s" podCreationTimestamp="2026-02-02 10:51:34 +0000 UTC" firstStartedPulling="2026-02-02 10:51:38.324753422 +0000 UTC m=+1179.416154872" lastFinishedPulling="2026-02-02 10:51:43.428963288 +0000 UTC m=+1184.520364738" observedRunningTime="2026-02-02 10:51:44.499172974 +0000 UTC m=+1185.590574424" watchObservedRunningTime="2026-02-02 10:51:44.510022864 +0000 UTC m=+1185.601424314" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.528575 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gxb44"] Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.530094 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.531834 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.544221 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gxb44"] Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.612863 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.617115 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.617214 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7f7p\" (UniqueName: \"kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.719295 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.719419 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w7f7p\" (UniqueName: \"kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.720106 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.737199 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7f7p\" (UniqueName: \"kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p\") pod \"root-account-create-update-gxb44\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:44 crc kubenswrapper[4845]: I0202 10:51:44.880566 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:45 crc kubenswrapper[4845]: I0202 10:51:45.177751 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 10:51:45 crc kubenswrapper[4845]: I0202 10:51:45.457524 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gxb44"] Feb 02 10:51:45 crc kubenswrapper[4845]: I0202 10:51:45.515901 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" event={"ID":"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd","Type":"ContainerStarted","Data":"cf759ca4ae8492477c32d3ee2af20b6a746da7ed09b5646264fc1f94d46044d6"} Feb 02 10:51:45 crc kubenswrapper[4845]: I0202 10:51:45.549204 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" podStartSLOduration=5.549182215 podStartE2EDuration="5.549182215s" podCreationTimestamp="2026-02-02 10:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:45.535372111 +0000 UTC m=+1186.626773561" watchObservedRunningTime="2026-02-02 10:51:45.549182215 +0000 UTC m=+1186.640583655" Feb 02 10:51:46 crc kubenswrapper[4845]: I0202 10:51:46.534253 4845 generic.go:334] "Generic (PLEG): container finished" podID="e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" containerID="cf759ca4ae8492477c32d3ee2af20b6a746da7ed09b5646264fc1f94d46044d6" exitCode=0 Feb 02 10:51:46 crc kubenswrapper[4845]: I0202 10:51:46.534474 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" event={"ID":"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd","Type":"ContainerDied","Data":"cf759ca4ae8492477c32d3ee2af20b6a746da7ed09b5646264fc1f94d46044d6"} Feb 02 10:51:46 crc kubenswrapper[4845]: I0202 10:51:46.555576 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0" Feb 02 10:51:46 crc kubenswrapper[4845]: E0202 10:51:46.555813 4845 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 10:51:46 crc kubenswrapper[4845]: E0202 10:51:46.555832 4845 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 10:51:46 crc kubenswrapper[4845]: E0202 10:51:46.555901 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift podName:d6db6e42-984a-484b-9f90-e6efa9817f37 nodeName:}" failed. No retries permitted until 2026-02-02 10:52:02.555862778 +0000 UTC m=+1203.647264228 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift") pod "swift-storage-0" (UID: "d6db6e42-984a-484b-9f90-e6efa9817f37") : configmap "swift-ring-files" not found Feb 02 10:51:47 crc kubenswrapper[4845]: W0202 10:51:47.285741 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8167c688_97fc_4a4b_9f1f_b0b037c86a9a.slice/crio-ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43 WatchSource:0}: Error finding container ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43: Status 404 returned error can't find the container with id ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43 Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.520695 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.579055 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftsvx\" (UniqueName: \"kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx\") pod \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.579186 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts\") pod \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\" (UID: \"712e6155-a77e-4f9c-9d55-a6edab62e9a7\") " Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.580969 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "712e6155-a77e-4f9c-9d55-a6edab62e9a7" (UID: "712e6155-a77e-4f9c-9d55-a6edab62e9a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.584567 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxb44" event={"ID":"8167c688-97fc-4a4b-9f1f-b0b037c86a9a","Type":"ContainerStarted","Data":"ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43"} Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.593930 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx" (OuterVolumeSpecName: "kube-api-access-ftsvx") pod "712e6155-a77e-4f9c-9d55-a6edab62e9a7" (UID: "712e6155-a77e-4f9c-9d55-a6edab62e9a7"). InnerVolumeSpecName "kube-api-access-ftsvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.602442 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.622521 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-z27qx" event={"ID":"712e6155-a77e-4f9c-9d55-a6edab62e9a7","Type":"ContainerDied","Data":"46c4e9ee220ea6b984a975979b0bcc36085734f0da5fd8ac38f3c9ab84f3d5ad"} Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.622567 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c4e9ee220ea6b984a975979b0bcc36085734f0da5fd8ac38f3c9ab84f3d5ad" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.682200 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftsvx\" (UniqueName: \"kubernetes.io/projected/712e6155-a77e-4f9c-9d55-a6edab62e9a7-kube-api-access-ftsvx\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:47 crc kubenswrapper[4845]: I0202 10:51:47.682241 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712e6155-a77e-4f9c-9d55-a6edab62e9a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.028482 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.087845 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85r5h\" (UniqueName: \"kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h\") pod \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.088137 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts\") pod \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\" (UID: \"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd\") " Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.089025 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" (UID: "e0fdfb88-9683-4cc2-95f1-6ab55c558dfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.094328 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h" (OuterVolumeSpecName: "kube-api-access-85r5h") pod "e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" (UID: "e0fdfb88-9683-4cc2-95f1-6ab55c558dfd"). InnerVolumeSpecName "kube-api-access-85r5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.189519 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85r5h\" (UniqueName: \"kubernetes.io/projected/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-kube-api-access-85r5h\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.189551 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.620228 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" event={"ID":"e0fdfb88-9683-4cc2-95f1-6ab55c558dfd","Type":"ContainerDied","Data":"955f18c841a8c16fdc8a34bb8452d5379275418e606966b811bfd890a335150b"} Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.620667 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955f18c841a8c16fdc8a34bb8452d5379275418e606966b811bfd890a335150b" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.621119 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c783-account-create-update-c8k62" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.624239 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerStarted","Data":"2061b24c8461fbcce0ec47e59e7dc4cc77062ce142faa619d444cd1d0c09e5c8"} Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.626754 4845 generic.go:334] "Generic (PLEG): container finished" podID="8167c688-97fc-4a4b-9f1f-b0b037c86a9a" containerID="13f6f84ab8aba03eaa86df418135b90d82987c32100a7069900fe6528abad51b" exitCode=0 Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.626802 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxb44" event={"ID":"8167c688-97fc-4a4b-9f1f-b0b037c86a9a","Type":"ContainerDied","Data":"13f6f84ab8aba03eaa86df418135b90d82987c32100a7069900fe6528abad51b"} Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.653254 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.211530805 podStartE2EDuration="59.653230062s" podCreationTimestamp="2026-02-02 10:50:49 +0000 UTC" firstStartedPulling="2026-02-02 10:51:04.944360575 +0000 UTC m=+1146.035762025" lastFinishedPulling="2026-02-02 10:51:47.386059832 +0000 UTC m=+1188.477461282" observedRunningTime="2026-02-02 10:51:48.650691579 +0000 UTC m=+1189.742093049" watchObservedRunningTime="2026-02-02 10:51:48.653230062 +0000 UTC m=+1189.744631512" Feb 02 10:51:48 crc kubenswrapper[4845]: I0202 10:51:48.883092 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tt4db" podUID="72da7703-b176-47cb-953e-de037d663c55" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:51:48 crc kubenswrapper[4845]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status 
Feb 02 10:51:48 crc kubenswrapper[4845]: > Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.151486 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.223538 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:51:50 crc kubenswrapper[4845]: E0202 10:51:50.224084 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.224106 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: E0202 10:51:50.224124 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167c688-97fc-4a4b-9f1f-b0b037c86a9a" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.224132 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167c688-97fc-4a4b-9f1f-b0b037c86a9a" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: E0202 10:51:50.224153 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712e6155-a77e-4f9c-9d55-a6edab62e9a7" containerName="mariadb-database-create" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.224160 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="712e6155-a77e-4f9c-9d55-a6edab62e9a7" containerName="mariadb-database-create" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.225030 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167c688-97fc-4a4b-9f1f-b0b037c86a9a" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.225062 4845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" containerName="mariadb-account-create-update" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.225128 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="712e6155-a77e-4f9c-9d55-a6edab62e9a7" containerName="mariadb-database-create" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.226232 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.229140 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.240864 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.258554 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7f7p\" (UniqueName: \"kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p\") pod \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.258771 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts\") pod \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\" (UID: \"8167c688-97fc-4a4b-9f1f-b0b037c86a9a\") " Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.259321 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8167c688-97fc-4a4b-9f1f-b0b037c86a9a" (UID: "8167c688-97fc-4a4b-9f1f-b0b037c86a9a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.267204 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p" (OuterVolumeSpecName: "kube-api-access-w7f7p") pod "8167c688-97fc-4a4b-9f1f-b0b037c86a9a" (UID: "8167c688-97fc-4a4b-9f1f-b0b037c86a9a"). InnerVolumeSpecName "kube-api-access-w7f7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.361412 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjmd\" (UniqueName: \"kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.361553 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.361580 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.361702 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7f7p\" (UniqueName: \"kubernetes.io/projected/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-kube-api-access-w7f7p\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:50 crc 
kubenswrapper[4845]: I0202 10:51:50.361719 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8167c688-97fc-4a4b-9f1f-b0b037c86a9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.463134 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.463189 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.463421 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjmd\" (UniqueName: \"kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.471072 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.489982 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.496028 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjmd\" (UniqueName: \"kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd\") pod \"mysqld-exporter-0\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.561697 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.589592 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.589678 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.591237 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.655702 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxb44" event={"ID":"8167c688-97fc-4a4b-9f1f-b0b037c86a9a","Type":"ContainerDied","Data":"ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43"} Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.655764 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce34701895725b3d5a3763408473ead104dae5a66310eff73d3f6c184da68a43" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.655955 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gxb44" Feb 02 10:51:50 crc kubenswrapper[4845]: I0202 10:51:50.657542 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 10:51:51 crc kubenswrapper[4845]: I0202 10:51:51.117708 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:51:51 crc kubenswrapper[4845]: W0202 10:51:51.119920 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada4f3a2_2715_4c0c_bc32_5c488a2e1996.slice/crio-4f3a714246c19659b6b750ef01e05f53d25576adc8d172e3db972b276255f379 WatchSource:0}: Error finding container 4f3a714246c19659b6b750ef01e05f53d25576adc8d172e3db972b276255f379: Status 404 returned error can't find the container with id 4f3a714246c19659b6b750ef01e05f53d25576adc8d172e3db972b276255f379 Feb 02 10:51:51 crc kubenswrapper[4845]: I0202 10:51:51.667633 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ada4f3a2-2715-4c0c-bc32-5c488a2e1996","Type":"ContainerStarted","Data":"4f3a714246c19659b6b750ef01e05f53d25576adc8d172e3db972b276255f379"} Feb 02 10:51:52 crc kubenswrapper[4845]: I0202 10:51:52.685235 4845 generic.go:334] "Generic (PLEG): container finished" podID="acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" containerID="de36731024371d8d225acca93d7d017177a15b01d17e1764ca4da25552e5472e" exitCode=0 Feb 02 10:51:52 crc kubenswrapper[4845]: I0202 10:51:52.685341 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fwkp8" event={"ID":"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c","Type":"ContainerDied","Data":"de36731024371d8d225acca93d7d017177a15b01d17e1764ca4da25552e5472e"} Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.894711 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 10:51:53 
crc kubenswrapper[4845]: I0202 10:51:53.895320 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus" containerID="cri-o://953536ca819321a406c08f163b4b6ff072898362e02b3adbe579d686cddbce8e" gracePeriod=600 Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.895547 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="thanos-sidecar" containerID="cri-o://2061b24c8461fbcce0ec47e59e7dc4cc77062ce142faa619d444cd1d0c09e5c8" gracePeriod=600 Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.895713 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="config-reloader" containerID="cri-o://a90515d7d5931387eadbfcaa9c4213064cf00c7a915781e7cf67147b71f9ba30" gracePeriod=600 Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.912213 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tt4db" podUID="72da7703-b176-47cb-953e-de037d663c55" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:51:53 crc kubenswrapper[4845]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:51:53 crc kubenswrapper[4845]: > Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.956053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:51:53 crc kubenswrapper[4845]: I0202 10:51:53.968190 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9qwr2" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.170413 4845 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="2e45ad6a-20f4-4da2-82b7-500ed29a0cd5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.201929 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tt4db-config-n8v8r"] Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.203468 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.207290 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.215758 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tt4db-config-n8v8r"] Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.255449 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="a61fa08e-868a-4415-88d5-7ed0eebbeb45" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.355977 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jb5d\" (UniqueName: \"kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.356200 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " 
pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.356346 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.356398 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.356439 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.356632 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459013 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jb5d\" (UniqueName: \"kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d\") pod 
\"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459072 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459157 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459199 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459232 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.459342 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts\") pod 
\"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.461220 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.461586 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.461677 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.462181 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.462698 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: 
\"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.490566 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jb5d\" (UniqueName: \"kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d\") pod \"ovn-controller-tt4db-config-n8v8r\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.515951 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="d0a3a285-364a-4df2-8a7c-947ff673f254" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.552653 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.615108 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.758744 4845 generic.go:334] "Generic (PLEG): container finished" podID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerID="2061b24c8461fbcce0ec47e59e7dc4cc77062ce142faa619d444cd1d0c09e5c8" exitCode=0 Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.758795 4845 generic.go:334] "Generic (PLEG): container finished" podID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerID="a90515d7d5931387eadbfcaa9c4213064cf00c7a915781e7cf67147b71f9ba30" exitCode=0 Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.758805 4845 generic.go:334] "Generic (PLEG): container finished" podID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerID="953536ca819321a406c08f163b4b6ff072898362e02b3adbe579d686cddbce8e" exitCode=0 Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 
10:51:54.759988 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerDied","Data":"2061b24c8461fbcce0ec47e59e7dc4cc77062ce142faa619d444cd1d0c09e5c8"} Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.760057 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerDied","Data":"a90515d7d5931387eadbfcaa9c4213064cf00c7a915781e7cf67147b71f9ba30"} Feb 02 10:51:54 crc kubenswrapper[4845]: I0202 10:51:54.760075 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerDied","Data":"953536ca819321a406c08f163b4b6ff072898362e02b3adbe579d686cddbce8e"} Feb 02 10:51:55 crc kubenswrapper[4845]: I0202 10:51:55.589920 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 02 10:51:55 crc kubenswrapper[4845]: I0202 10:51:55.923477 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gxb44"] Feb 02 10:51:55 crc kubenswrapper[4845]: I0202 10:51:55.932572 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gxb44"] Feb 02 10:51:57 crc kubenswrapper[4845]: I0202 10:51:57.727158 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8167c688-97fc-4a4b-9f1f-b0b037c86a9a" path="/var/lib/kubelet/pods/8167c688-97fc-4a4b-9f1f-b0b037c86a9a/volumes" Feb 02 10:51:58 crc kubenswrapper[4845]: I0202 10:51:58.897245 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tt4db" 
podUID="72da7703-b176-47cb-953e-de037d663c55" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:51:58 crc kubenswrapper[4845]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:51:58 crc kubenswrapper[4845]: > Feb 02 10:52:00 crc kubenswrapper[4845]: I0202 10:52:00.590094 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 02 10:52:00 crc kubenswrapper[4845]: I0202 10:52:00.942550 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qqb26"] Feb 02 10:52:00 crc kubenswrapper[4845]: I0202 10:52:00.945141 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:00 crc kubenswrapper[4845]: I0202 10:52:00.948714 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 10:52:00 crc kubenswrapper[4845]: I0202 10:52:00.950845 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qqb26"] Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.103300 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.103457 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdkt\" (UniqueName: 
\"kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.205412 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjdkt\" (UniqueName: \"kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.205583 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.206431 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.245664 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjdkt\" (UniqueName: \"kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt\") pod \"root-account-create-update-qqb26\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") " pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.297783 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qqb26" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.506367 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.612504 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.612808 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.612875 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.612927 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.613028 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: 
\"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.613057 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v29f4\" (UniqueName: \"kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.613112 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf\") pod \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\" (UID: \"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.613665 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.615676 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.678163 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4" (OuterVolumeSpecName: "kube-api-access-v29f4") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "kube-api-access-v29f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.682206 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.712499 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.715868 4845 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.715909 4845 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.715920 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.715929 4845 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.715939 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v29f4\" (UniqueName: \"kubernetes.io/projected/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-kube-api-access-v29f4\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.741356 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.742152 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts" (OuterVolumeSpecName: "scripts") pod "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" (UID: "acbaf357-af6c-46b6-b6f0-de2b6e4ee44c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.809338 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.818442 4845 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.818478 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acbaf357-af6c-46b6-b6f0-de2b6e4ee44c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.867739 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fwkp8" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.867746 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fwkp8" event={"ID":"acbaf357-af6c-46b6-b6f0-de2b6e4ee44c","Type":"ContainerDied","Data":"b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732"} Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.867793 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3336106af570a904c4c4f983f9ee425832488db515a8996781482a3bc1d2732" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.895327 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b04f366-8a31-4d2e-8d11-e8682d578a07","Type":"ContainerDied","Data":"f66ab08a88fdf01ed8eac1ea6cefb40d4702621c1aec3526c050777cfd6e0be7"} Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.895433 4845 scope.go:117] "RemoveContainer" containerID="2061b24c8461fbcce0ec47e59e7dc4cc77062ce142faa619d444cd1d0c09e5c8" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.895761 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.913269 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ada4f3a2-2715-4c0c-bc32-5c488a2e1996","Type":"ContainerStarted","Data":"995f2b7a30c667d098b78f0a5f78fb72fb30f1f5da96bbc6a50e3a4f536e40bb"} Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.921273 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.921414 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.921509 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.921601 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.921790 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.922610 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.923123 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md5br\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.923194 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.923483 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.923578 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.927472 4845 scope.go:117] "RemoveContainer" containerID="a90515d7d5931387eadbfcaa9c4213064cf00c7a915781e7cf67147b71f9ba30" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.928400 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.929435 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config" (OuterVolumeSpecName: "config") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.929607 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out" (OuterVolumeSpecName: "config-out") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.934570 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1\") pod \"9b04f366-8a31-4d2e-8d11-e8682d578a07\" (UID: \"9b04f366-8a31-4d2e-8d11-e8682d578a07\") " Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.942408 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br" (OuterVolumeSpecName: "kube-api-access-md5br") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "kube-api-access-md5br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.942600 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.944181 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947241 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947724 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md5br\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-kube-api-access-md5br\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947757 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947771 4845 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947789 4845 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947804 4845 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b04f366-8a31-4d2e-8d11-e8682d578a07-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947815 4845 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b04f366-8a31-4d2e-8d11-e8682d578a07-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947826 4845 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.947835 4845 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b04f366-8a31-4d2e-8d11-e8682d578a07-config-out\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.953672 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.769515347 podStartE2EDuration="11.953651727s" podCreationTimestamp="2026-02-02 10:51:50 +0000 UTC" firstStartedPulling="2026-02-02 10:51:51.126342004 +0000 UTC m=+1192.217743454" lastFinishedPulling="2026-02-02 10:52:01.310478384 +0000 UTC m=+1202.401879834" observedRunningTime="2026-02-02 10:52:01.943391615 +0000 UTC m=+1203.034793065" watchObservedRunningTime="2026-02-02 10:52:01.953651727 +0000 UTC m=+1203.045053177"
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.981516 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config" (OuterVolumeSpecName: "web-config") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:52:01 crc kubenswrapper[4845]: I0202 10:52:01.985949 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9b04f366-8a31-4d2e-8d11-e8682d578a07" (UID: "9b04f366-8a31-4d2e-8d11-e8682d578a07"). InnerVolumeSpecName "pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.037051 4845 scope.go:117] "RemoveContainer" containerID="953536ca819321a406c08f163b4b6ff072898362e02b3adbe579d686cddbce8e"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.055055 4845 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") on node \"crc\" "
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.055101 4845 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b04f366-8a31-4d2e-8d11-e8682d578a07-web-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.092879 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tt4db-config-n8v8r"]
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.107520 4845 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.108199 4845 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937") on node "crc"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.114372 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qqb26"]
Feb 02 10:52:02 crc kubenswrapper[4845]: W0202 10:52:02.128029 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b9529c_8c20_47e9_8c19_910a31b30683.slice/crio-46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0 WatchSource:0}: Error finding container 46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0: Status 404 returned error can't find the container with id 46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.138570 4845 scope.go:117] "RemoveContainer" containerID="f241191074a6c7fafb8932f36a972acd0bd84c8ac50c73e751b3fdd46aa2e817"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.158935 4845 reconciler_common.go:293] "Volume detached for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.385933 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.405060 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422088 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 10:52:02 crc kubenswrapper[4845]: E0202 10:52:02.422509 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" containerName="swift-ring-rebalance"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422521 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" containerName="swift-ring-rebalance"
Feb 02 10:52:02 crc kubenswrapper[4845]: E0202 10:52:02.422530 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="init-config-reloader"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422536 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="init-config-reloader"
Feb 02 10:52:02 crc kubenswrapper[4845]: E0202 10:52:02.422545 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="config-reloader"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422553 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="config-reloader"
Feb 02 10:52:02 crc kubenswrapper[4845]: E0202 10:52:02.422569 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="thanos-sidecar"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422575 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="thanos-sidecar"
Feb 02 10:52:02 crc kubenswrapper[4845]: E0202 10:52:02.422592 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422597 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422805 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="prometheus"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422819 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="thanos-sidecar"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422832 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="acbaf357-af6c-46b6-b6f0-de2b6e4ee44c" containerName="swift-ring-rebalance"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.422838 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" containerName="config-reloader"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.424673 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.431870 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.432208 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.432403 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.434210 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.434475 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.434521 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.434772 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.435516 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.438959 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wp8jb"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.443772 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571508 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31859db3-3de0-46d0-a81b-b951f1d45279-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571555 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571582 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571656 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571690 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571728 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571759 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571779 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571806 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571827 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571962 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7qv\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-kube-api-access-jg7qv\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.571989 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.572027 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.591472 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d6db6e42-984a-484b-9f90-e6efa9817f37-etc-swift\") pod \"swift-storage-0\" (UID: \"d6db6e42-984a-484b-9f90-e6efa9817f37\") " pod="openstack/swift-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.673490 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.674860 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675001 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675123 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675250 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675484 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675691 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg7qv\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-kube-api-access-jg7qv\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.675803 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.676448 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.676585 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31859db3-3de0-46d0-a81b-b951f1d45279-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.676735 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.676838 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.678253 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.676463 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.682820 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.682862 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b560f795087ddb8e1c0fbe0076d2f0e9dba0d3739abc904f350829f75b851b7/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.684433 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.687089 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.687913 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-config\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.688032 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.689998 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.691489 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/31859db3-3de0-46d0-a81b-b951f1d45279-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.712532 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.720286 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31859db3-3de0-46d0-a81b-b951f1d45279-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.721853 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.725829 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg7qv\" (UniqueName: \"kubernetes.io/projected/31859db3-3de0-46d0-a81b-b951f1d45279-kube-api-access-jg7qv\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.721870 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31859db3-3de0-46d0-a81b-b951f1d45279-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.768194 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c84ebbd0-5f32-4774-9e9f-fdc701180937\") pod \"prometheus-metric-storage-0\" (UID: \"31859db3-3de0-46d0-a81b-b951f1d45279\") " pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.928192 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kgn95" event={"ID":"34877df4-b654-4e0c-ac67-da6fd95c249d","Type":"ContainerStarted","Data":"3a7ec3be2e02a83d1849c451b822789235edc5a4672079139f59894d6d036a70"}
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.940629 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db-config-n8v8r" event={"ID":"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7","Type":"ContainerStarted","Data":"2fbcebcb43f4c40bf6cc312cfba0d8248bb4757b182770fa199288683be575a0"}
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.940680 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db-config-n8v8r" event={"ID":"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7","Type":"ContainerStarted","Data":"34e5f3fc33a4028b7075045963bff9f4ca350c1123047d998cc5412b97fcacd8"}
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.944984 4845 generic.go:334] "Generic (PLEG): container finished" podID="62b9529c-8c20-47e9-8c19-910a31b30683" containerID="8f242d53b8b45b41f78ed81a7dfba88bead6b72ed12835b100468efa95d3864d" exitCode=0
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.947926 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqb26" event={"ID":"62b9529c-8c20-47e9-8c19-910a31b30683","Type":"ContainerDied","Data":"8f242d53b8b45b41f78ed81a7dfba88bead6b72ed12835b100468efa95d3864d"}
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.947987 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqb26" event={"ID":"62b9529c-8c20-47e9-8c19-910a31b30683","Type":"ContainerStarted","Data":"46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0"}
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.950106 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kgn95" podStartSLOduration=2.69404387 podStartE2EDuration="19.950086028s" podCreationTimestamp="2026-02-02 10:51:43 +0000 UTC" firstStartedPulling="2026-02-02 10:51:44.091280028 +0000 UTC m=+1185.182681478" lastFinishedPulling="2026-02-02 10:52:01.347322186 +0000 UTC m=+1202.438723636" observedRunningTime="2026-02-02 10:52:02.947579456 +0000 UTC m=+1204.038980906" watchObservedRunningTime="2026-02-02 10:52:02.950086028 +0000 UTC m=+1204.041487478"
Feb 02 10:52:02 crc kubenswrapper[4845]: I0202 10:52:02.969221 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tt4db-config-n8v8r" podStartSLOduration=8.969200803 podStartE2EDuration="8.969200803s" podCreationTimestamp="2026-02-02 10:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:02.9687255 +0000 UTC m=+1204.060126980" watchObservedRunningTime="2026-02-02 10:52:02.969200803 +0000 UTC m=+1204.060602253"
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.055765 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 02 10:52:03 crc kubenswrapper[4845]: W0202 10:52:03.474641 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6db6e42_984a_484b_9f90_e6efa9817f37.slice/crio-bb98dfc1bdb4a446eae63e770a43a380562aa21e2db2e8c9e29801ba96768172 WatchSource:0}: Error finding container bb98dfc1bdb4a446eae63e770a43a380562aa21e2db2e8c9e29801ba96768172: Status 404 returned error can't find the container with id bb98dfc1bdb4a446eae63e770a43a380562aa21e2db2e8c9e29801ba96768172
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.477240 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.486784 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.663560 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.734057 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b04f366-8a31-4d2e-8d11-e8682d578a07" path="/var/lib/kubelet/pods/9b04f366-8a31-4d2e-8d11-e8682d578a07/volumes"
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.975598 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"bb98dfc1bdb4a446eae63e770a43a380562aa21e2db2e8c9e29801ba96768172"}
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.981456 4845 generic.go:334] "Generic (PLEG): container finished" podID="0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" containerID="2fbcebcb43f4c40bf6cc312cfba0d8248bb4757b182770fa199288683be575a0" exitCode=0
Feb 02 10:52:03 crc kubenswrapper[4845]: I0202 10:52:03.981564 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db-config-n8v8r" event={"ID":"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7","Type":"ContainerDied","Data":"2fbcebcb43f4c40bf6cc312cfba0d8248bb4757b182770fa199288683be575a0"}
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.001341 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerStarted","Data":"78313a6e4ad0baac042d452b1cef704646e4f126bb940a0867bc4032b6af8ae4"}
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.033535 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-tt4db"
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.173127 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.258091 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.520393 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.526372 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqb26"
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.640733 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdkt\" (UniqueName: \"kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt\") pod \"62b9529c-8c20-47e9-8c19-910a31b30683\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") "
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.641105 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts\") pod \"62b9529c-8c20-47e9-8c19-910a31b30683\" (UID: \"62b9529c-8c20-47e9-8c19-910a31b30683\") "
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.645242 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62b9529c-8c20-47e9-8c19-910a31b30683" (UID: "62b9529c-8c20-47e9-8c19-910a31b30683"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.678366 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt" (OuterVolumeSpecName: "kube-api-access-rjdkt") pod "62b9529c-8c20-47e9-8c19-910a31b30683" (UID: "62b9529c-8c20-47e9-8c19-910a31b30683"). InnerVolumeSpecName "kube-api-access-rjdkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.743534 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62b9529c-8c20-47e9-8c19-910a31b30683-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:04 crc kubenswrapper[4845]: I0202 10:52:04.743560 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjdkt\" (UniqueName: \"kubernetes.io/projected/62b9529c-8c20-47e9-8c19-910a31b30683-kube-api-access-rjdkt\") on node \"crc\" DevicePath \"\""
Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.012149 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqb26" event={"ID":"62b9529c-8c20-47e9-8c19-910a31b30683","Type":"ContainerDied","Data":"46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0"}
Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.012221 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b1e7cd3db604e0df6f962e129a926ea3c2ab65a02772382c80d9cbafe9cef0"
Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.012165 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqb26"
Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.507100 4845 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559701 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559763 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559799 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run" (OuterVolumeSpecName: "var-run") pod "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559875 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jb5d\" (UniqueName: \"kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559935 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.559961 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560084 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn\") pod \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\" (UID: \"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7\") " Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560299 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560495 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560802 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560921 4845 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560941 4845 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.560956 4845 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.561658 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts" (OuterVolumeSpecName: "scripts") pod 
"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.570427 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d" (OuterVolumeSpecName: "kube-api-access-6jb5d") pod "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" (UID: "0bb68da2-57ab-4c1b-8e95-e1d434d6cae7"). InnerVolumeSpecName "kube-api-access-6jb5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.663650 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.664052 4845 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:05 crc kubenswrapper[4845]: I0202 10:52:05.664154 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jb5d\" (UniqueName: \"kubernetes.io/projected/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7-kube-api-access-6jb5d\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:06 crc kubenswrapper[4845]: I0202 10:52:06.030033 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tt4db-config-n8v8r" event={"ID":"0bb68da2-57ab-4c1b-8e95-e1d434d6cae7","Type":"ContainerDied","Data":"34e5f3fc33a4028b7075045963bff9f4ca350c1123047d998cc5412b97fcacd8"} Feb 02 10:52:06 crc kubenswrapper[4845]: I0202 10:52:06.030373 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e5f3fc33a4028b7075045963bff9f4ca350c1123047d998cc5412b97fcacd8" Feb 02 10:52:06 
crc kubenswrapper[4845]: I0202 10:52:06.030082 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tt4db-config-n8v8r" Feb 02 10:52:06 crc kubenswrapper[4845]: I0202 10:52:06.649617 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-tt4db-config-n8v8r"] Feb 02 10:52:06 crc kubenswrapper[4845]: I0202 10:52:06.672852 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-tt4db-config-n8v8r"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.033770 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h6qld"] Feb 02 10:52:07 crc kubenswrapper[4845]: E0202 10:52:07.035018 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" containerName="ovn-config" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.035093 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" containerName="ovn-config" Feb 02 10:52:07 crc kubenswrapper[4845]: E0202 10:52:07.035228 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b9529c-8c20-47e9-8c19-910a31b30683" containerName="mariadb-account-create-update" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.035277 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b9529c-8c20-47e9-8c19-910a31b30683" containerName="mariadb-account-create-update" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.050651 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b9529c-8c20-47e9-8c19-910a31b30683" containerName="mariadb-account-create-update" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.050702 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" containerName="ovn-config" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.052081 4845 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.088651 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"c2fbcc4a31322cd87e30f6eef32d4520586ba907ceec6affbbcf35b9ccacc481"} Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.089474 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"5a8e3bf190cdbf1461764cc3a1076f233459488dc63b68121c7bcff0821c1a42"} Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.089625 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"6d8eda2f411a05e52cc68867032f30a625ae5ae33c0a3cc3a1214aedda45e09b"} Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.108682 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f729k\" (UniqueName: \"kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.109239 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.115689 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h6qld"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 
10:52:07.211665 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f729k\" (UniqueName: \"kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.212174 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.213045 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.254532 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f729k\" (UniqueName: \"kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k\") pod \"cinder-db-create-h6qld\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.258870 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cba3-account-create-update-bph8b"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.264498 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.271826 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.302627 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cba3-account-create-update-bph8b"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.315871 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6csf\" (UniqueName: \"kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf\") pod \"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.315972 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts\") pod \"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.328603 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-wnlhd"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.330280 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.375112 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wnlhd"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.393863 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.422107 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6csf\" (UniqueName: \"kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf\") pod \"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.422227 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts\") pod \"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.422268 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89jm\" (UniqueName: \"kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.422299 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.423420 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts\") pod 
\"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.456503 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jbstq"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.457917 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.460331 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6csf\" (UniqueName: \"kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf\") pod \"cinder-cba3-account-create-update-bph8b\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.466657 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.467120 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.467646 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4r6h5" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.468684 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.478021 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jbstq"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.517476 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-8ggwt"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.518871 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.524392 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.524481 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89jm\" (UniqueName: \"kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.524519 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.524544 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.524571 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n52gp\" (UniqueName: \"kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") 
" pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.525437 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.556853 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-8ggwt"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.567629 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89jm\" (UniqueName: \"kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm\") pod \"barbican-db-create-wnlhd\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.577488 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-edbb-account-create-update-gt7ll"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.578783 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.581181 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.595870 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-edbb-account-create-update-gt7ll"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.610453 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.611879 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ee84-account-create-update-f2n87"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.613163 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.617201 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.620207 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ee84-account-create-update-f2n87"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636293 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636387 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49zl\" (UniqueName: \"kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 
02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636529 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n52gp\" (UniqueName: \"kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636670 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsbw\" (UniqueName: \"kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.636794 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.637092 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.653840 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc 
kubenswrapper[4845]: I0202 10:52:07.654082 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.654786 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.686737 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n52gp\" (UniqueName: \"kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp\") pod \"keystone-db-sync-jbstq\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.702098 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-phm7s"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.704317 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739074 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739146 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g49zl\" (UniqueName: \"kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739209 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts\") pod \"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739234 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thsbw\" (UniqueName: \"kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739279 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmdn\" (UniqueName: 
\"kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739301 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739378 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrbxm\" (UniqueName: \"kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm\") pod \"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.739422 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.743228 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.743256 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.755142 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb68da2-57ab-4c1b-8e95-e1d434d6cae7" path="/var/lib/kubelet/pods/0bb68da2-57ab-4c1b-8e95-e1d434d6cae7/volumes" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.756222 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-phm7s"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.759394 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thsbw\" (UniqueName: \"kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw\") pod \"heat-edbb-account-create-update-gt7ll\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.764558 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g49zl\" (UniqueName: \"kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl\") pod \"heat-db-create-8ggwt\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.787081 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.810549 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.826917 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.841685 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrbxm\" (UniqueName: \"kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm\") pod \"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.841763 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.841925 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts\") pod \"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.841969 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmdn\" (UniqueName: \"kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.843337 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts\") pod 
\"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.891429 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrbxm\" (UniqueName: \"kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm\") pod \"barbican-ee84-account-create-update-f2n87\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.913157 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3a8c-account-create-update-qmrlh"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.919936 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.922719 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.923625 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.941870 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a8c-account-create-update-qmrlh"] Feb 02 10:52:07 crc kubenswrapper[4845]: I0202 10:52:07.941990 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmdn\" (UniqueName: \"kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn\") pod \"neutron-db-create-phm7s\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") 
" pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.045998 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.046379 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6tc\" (UniqueName: \"kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.110038 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h6qld"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.130319 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"5e847664210fc8ad1c87b28e652a97e08bd34261194bd9ae9a7266eb27ea4a77"} Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.148341 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6tc\" (UniqueName: \"kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.148428 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.151709 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.169949 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerStarted","Data":"fb8b139b6ecef2c8e8393a96d0038f98cb7a4d3638100daa0e3715cdd7f50c17"} Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.171154 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.229074 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.233192 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6tc\" (UniqueName: \"kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc\") pod \"neutron-3a8c-account-create-update-qmrlh\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.255943 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.318995 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wnlhd"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.666745 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cba3-account-create-update-bph8b"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.699068 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jbstq"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.766359 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-8ggwt"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.907443 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-edbb-account-create-update-gt7ll"] Feb 02 10:52:08 crc kubenswrapper[4845]: I0202 10:52:08.922623 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-phm7s"] Feb 02 10:52:08 crc kubenswrapper[4845]: W0202 10:52:08.926216 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45ea84b0_b5f2_4a74_8f6a_67b4176e5d1e.slice/crio-53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4 WatchSource:0}: Error finding container 53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4: Status 404 returned error can't find the container with id 53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4 Feb 02 10:52:09 crc kubenswrapper[4845]: W0202 10:52:09.156625 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefb890e0_ca91_4204_8e4b_9036a64e56e1.slice/crio-56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed WatchSource:0}: Error finding container 
56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed: Status 404 returned error can't find the container with id 56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.159282 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ee84-account-create-update-f2n87"] Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.173785 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a8c-account-create-update-qmrlh"] Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.184591 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a8c-account-create-update-qmrlh" event={"ID":"37e0fd8e-0f85-48be-b690-c11e3c09f340","Type":"ContainerStarted","Data":"e0b6da0abcbdb37a5cad2eade5c52fcd8029eb38e9f1aa7e99827abac29549bb"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.185930 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ee84-account-create-update-f2n87" event={"ID":"efb890e0-ca91-4204-8e4b-9036a64e56e1","Type":"ContainerStarted","Data":"56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.188525 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-phm7s" event={"ID":"2cf66acf-0a94-4850-913b-711b19b88dd3","Type":"ContainerStarted","Data":"8a50e4e7b2abb555310e50a778920bc7b8f7c931704bcd0016e368041f8db92c"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.191483 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wnlhd" event={"ID":"afec66f7-184b-44f1-a172-b1e78739309d","Type":"ContainerStarted","Data":"7d589ab274ae62a36101e94b25da0bcc5210eda8997714fcc496cd1866ddd622"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.191522 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wnlhd" 
event={"ID":"afec66f7-184b-44f1-a172-b1e78739309d","Type":"ContainerStarted","Data":"021b0148be56b73890ed92473db16080cb897f269c2e554a1690906e496aa7cc"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.196709 4845 generic.go:334] "Generic (PLEG): container finished" podID="ad1fe923-0409-4c3c-869c-9d0c09a2506a" containerID="c4fb16737964c587428fe338ca95d6abb864bb90373d40cc0d8bd05a89c69fe2" exitCode=0 Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.196855 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h6qld" event={"ID":"ad1fe923-0409-4c3c-869c-9d0c09a2506a","Type":"ContainerDied","Data":"c4fb16737964c587428fe338ca95d6abb864bb90373d40cc0d8bd05a89c69fe2"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.196881 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h6qld" event={"ID":"ad1fe923-0409-4c3c-869c-9d0c09a2506a","Type":"ContainerStarted","Data":"e44bcf835547d78301411ca32122b909f9e1e17afa231378a1944362c26b5d4c"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.199682 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cba3-account-create-update-bph8b" event={"ID":"9af47917-824a-452b-b0db-03ad3f4861df","Type":"ContainerStarted","Data":"c026c97f3f623cb46f500e205081203362ba8f0b275d0368c0ea74ac7d34d244"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.199762 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cba3-account-create-update-bph8b" event={"ID":"9af47917-824a-452b-b0db-03ad3f4861df","Type":"ContainerStarted","Data":"00c59b6d5b21a612fe5716f1f92a3adb28414e415b4e9b69ebd7595359e745c4"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.202140 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jbstq" event={"ID":"967b449a-1414-4a5c-b625-bcaf12b17ade","Type":"ContainerStarted","Data":"56c91e8dc88a7d8da4509ded8923f1a4a18a719a8066a712556a9928447a6799"} Feb 02 10:52:09 
crc kubenswrapper[4845]: I0202 10:52:09.217702 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-wnlhd" podStartSLOduration=2.217680681 podStartE2EDuration="2.217680681s" podCreationTimestamp="2026-02-02 10:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:09.213576004 +0000 UTC m=+1210.304977454" watchObservedRunningTime="2026-02-02 10:52:09.217680681 +0000 UTC m=+1210.309082141" Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.220695 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8ggwt" event={"ID":"367466e2-34f1-4f2c-9e11-eb6c24c5318c","Type":"ContainerStarted","Data":"5317c6760f251bc9751fe010b7bce1cb08e32b0f7d5159c83d81107f454b9348"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.237692 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-edbb-account-create-update-gt7ll" event={"ID":"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e","Type":"ContainerStarted","Data":"53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4"} Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.278589 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-8ggwt" podStartSLOduration=2.27856972 podStartE2EDuration="2.27856972s" podCreationTimestamp="2026-02-02 10:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:09.267650428 +0000 UTC m=+1210.359051878" watchObservedRunningTime="2026-02-02 10:52:09.27856972 +0000 UTC m=+1210.369971170" Feb 02 10:52:09 crc kubenswrapper[4845]: I0202 10:52:09.281608 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cba3-account-create-update-bph8b" podStartSLOduration=2.281583236 podStartE2EDuration="2.281583236s" 
podCreationTimestamp="2026-02-02 10:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:09.250655743 +0000 UTC m=+1210.342057193" watchObservedRunningTime="2026-02-02 10:52:09.281583236 +0000 UTC m=+1210.372984686" Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.287596 4845 generic.go:334] "Generic (PLEG): container finished" podID="37e0fd8e-0f85-48be-b690-c11e3c09f340" containerID="a8fcb11c46488e7ab4d44f9b73e21b0ab99aab04b639ff39da8c3dcc0a64fd01" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.288270 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a8c-account-create-update-qmrlh" event={"ID":"37e0fd8e-0f85-48be-b690-c11e3c09f340","Type":"ContainerDied","Data":"a8fcb11c46488e7ab4d44f9b73e21b0ab99aab04b639ff39da8c3dcc0a64fd01"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.297433 4845 generic.go:334] "Generic (PLEG): container finished" podID="367466e2-34f1-4f2c-9e11-eb6c24c5318c" containerID="51bb841af27a85ca68abff1f32dcaf10c9ab7f03618a7c256ab9040498ec70ed" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.298660 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8ggwt" event={"ID":"367466e2-34f1-4f2c-9e11-eb6c24c5318c","Type":"ContainerDied","Data":"51bb841af27a85ca68abff1f32dcaf10c9ab7f03618a7c256ab9040498ec70ed"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.302131 4845 generic.go:334] "Generic (PLEG): container finished" podID="45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" containerID="b23d76a377763a9782e43474e58cb72f1e55efdf39ac9b7aaca3beca20c268f7" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.302193 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-edbb-account-create-update-gt7ll" 
event={"ID":"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e","Type":"ContainerDied","Data":"b23d76a377763a9782e43474e58cb72f1e55efdf39ac9b7aaca3beca20c268f7"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.303832 4845 generic.go:334] "Generic (PLEG): container finished" podID="efb890e0-ca91-4204-8e4b-9036a64e56e1" containerID="5c1dc639a0ba7e9ddef0cb628d1e688f84eee0dbcad460df6e423cdfb04749bd" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.303910 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ee84-account-create-update-f2n87" event={"ID":"efb890e0-ca91-4204-8e4b-9036a64e56e1","Type":"ContainerDied","Data":"5c1dc639a0ba7e9ddef0cb628d1e688f84eee0dbcad460df6e423cdfb04749bd"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.311018 4845 generic.go:334] "Generic (PLEG): container finished" podID="2cf66acf-0a94-4850-913b-711b19b88dd3" containerID="f8e4a4bf00801e300e8c97b01bb80d6e16ede05a4fe2abe38abfdf7564fa62f4" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.311233 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-phm7s" event={"ID":"2cf66acf-0a94-4850-913b-711b19b88dd3","Type":"ContainerDied","Data":"f8e4a4bf00801e300e8c97b01bb80d6e16ede05a4fe2abe38abfdf7564fa62f4"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.314538 4845 generic.go:334] "Generic (PLEG): container finished" podID="afec66f7-184b-44f1-a172-b1e78739309d" containerID="7d589ab274ae62a36101e94b25da0bcc5210eda8997714fcc496cd1866ddd622" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.314605 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wnlhd" event={"ID":"afec66f7-184b-44f1-a172-b1e78739309d","Type":"ContainerDied","Data":"7d589ab274ae62a36101e94b25da0bcc5210eda8997714fcc496cd1866ddd622"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.334763 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="9af47917-824a-452b-b0db-03ad3f4861df" containerID="c026c97f3f623cb46f500e205081203362ba8f0b275d0368c0ea74ac7d34d244" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.334856 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cba3-account-create-update-bph8b" event={"ID":"9af47917-824a-452b-b0db-03ad3f4861df","Type":"ContainerDied","Data":"c026c97f3f623cb46f500e205081203362ba8f0b275d0368c0ea74ac7d34d244"} Feb 02 10:52:10 crc kubenswrapper[4845]: I0202 10:52:10.996767 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.080729 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f729k\" (UniqueName: \"kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k\") pod \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.081129 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts\") pod \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\" (UID: \"ad1fe923-0409-4c3c-869c-9d0c09a2506a\") " Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.082174 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad1fe923-0409-4c3c-869c-9d0c09a2506a" (UID: "ad1fe923-0409-4c3c-869c-9d0c09a2506a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.088143 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k" (OuterVolumeSpecName: "kube-api-access-f729k") pod "ad1fe923-0409-4c3c-869c-9d0c09a2506a" (UID: "ad1fe923-0409-4c3c-869c-9d0c09a2506a"). InnerVolumeSpecName "kube-api-access-f729k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.183490 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1fe923-0409-4c3c-869c-9d0c09a2506a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.183528 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f729k\" (UniqueName: \"kubernetes.io/projected/ad1fe923-0409-4c3c-869c-9d0c09a2506a-kube-api-access-f729k\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.354986 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"74923277cb3713500b97089277d8f20c5a6f124a4bd6af7f533053a052e8bb3a"} Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.355256 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"405c922bf87f1bef0ae98c722bd475e41e4a19d9656c4a5e8d7d1ca39f678583"} Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.355267 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"24917214958f74c9af404d02d87263286e2f56621b9c258d51a89d82c92b4fc4"} Feb 02 10:52:11 crc 
kubenswrapper[4845]: I0202 10:52:11.357708 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h6qld" event={"ID":"ad1fe923-0409-4c3c-869c-9d0c09a2506a","Type":"ContainerDied","Data":"e44bcf835547d78301411ca32122b909f9e1e17afa231378a1944362c26b5d4c"} Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.357747 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44bcf835547d78301411ca32122b909f9e1e17afa231378a1944362c26b5d4c" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.357764 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h6qld" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.748929 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.921746 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts\") pod \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.921960 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g49zl\" (UniqueName: \"kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl\") pod \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\" (UID: \"367466e2-34f1-4f2c-9e11-eb6c24c5318c\") " Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.922217 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "367466e2-34f1-4f2c-9e11-eb6c24c5318c" (UID: "367466e2-34f1-4f2c-9e11-eb6c24c5318c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.923028 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/367466e2-34f1-4f2c-9e11-eb6c24c5318c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4845]: I0202 10:52:11.934235 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl" (OuterVolumeSpecName: "kube-api-access-g49zl") pod "367466e2-34f1-4f2c-9e11-eb6c24c5318c" (UID: "367466e2-34f1-4f2c-9e11-eb6c24c5318c"). InnerVolumeSpecName "kube-api-access-g49zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.018474 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.025265 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g49zl\" (UniqueName: \"kubernetes.io/projected/367466e2-34f1-4f2c-9e11-eb6c24c5318c-kube-api-access-g49zl\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.027749 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.126319 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts\") pod \"afec66f7-184b-44f1-a172-b1e78739309d\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.126400 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrbxm\" (UniqueName: \"kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm\") pod \"efb890e0-ca91-4204-8e4b-9036a64e56e1\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.126434 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89jm\" (UniqueName: \"kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm\") pod \"afec66f7-184b-44f1-a172-b1e78739309d\" (UID: \"afec66f7-184b-44f1-a172-b1e78739309d\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.126670 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts\") pod \"efb890e0-ca91-4204-8e4b-9036a64e56e1\" (UID: \"efb890e0-ca91-4204-8e4b-9036a64e56e1\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.127621 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efb890e0-ca91-4204-8e4b-9036a64e56e1" (UID: "efb890e0-ca91-4204-8e4b-9036a64e56e1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.128495 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afec66f7-184b-44f1-a172-b1e78739309d" (UID: "afec66f7-184b-44f1-a172-b1e78739309d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.133252 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm" (OuterVolumeSpecName: "kube-api-access-s89jm") pod "afec66f7-184b-44f1-a172-b1e78739309d" (UID: "afec66f7-184b-44f1-a172-b1e78739309d"). InnerVolumeSpecName "kube-api-access-s89jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.134261 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm" (OuterVolumeSpecName: "kube-api-access-vrbxm") pod "efb890e0-ca91-4204-8e4b-9036a64e56e1" (UID: "efb890e0-ca91-4204-8e4b-9036a64e56e1"). InnerVolumeSpecName "kube-api-access-vrbxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.230259 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afec66f7-184b-44f1-a172-b1e78739309d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.230311 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrbxm\" (UniqueName: \"kubernetes.io/projected/efb890e0-ca91-4204-8e4b-9036a64e56e1-kube-api-access-vrbxm\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.230328 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89jm\" (UniqueName: \"kubernetes.io/projected/afec66f7-184b-44f1-a172-b1e78739309d-kube-api-access-s89jm\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.230344 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb890e0-ca91-4204-8e4b-9036a64e56e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.339087 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.363085 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.369861 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.390984 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ee84-account-create-update-f2n87" event={"ID":"efb890e0-ca91-4204-8e4b-9036a64e56e1","Type":"ContainerDied","Data":"56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.391033 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56770c5d385d67ae06726838ff4816cd3884482b8958ac9f7af99eb30324dfed" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.391110 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ee84-account-create-update-f2n87" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.403270 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-phm7s" event={"ID":"2cf66acf-0a94-4850-913b-711b19b88dd3","Type":"ContainerDied","Data":"8a50e4e7b2abb555310e50a778920bc7b8f7c931704bcd0016e368041f8db92c"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.403319 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a50e4e7b2abb555310e50a778920bc7b8f7c931704bcd0016e368041f8db92c" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.403393 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-phm7s" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.410302 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wnlhd" event={"ID":"afec66f7-184b-44f1-a172-b1e78739309d","Type":"ContainerDied","Data":"021b0148be56b73890ed92473db16080cb897f269c2e554a1690906e496aa7cc"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.410359 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021b0148be56b73890ed92473db16080cb897f269c2e554a1690906e496aa7cc" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.410438 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wnlhd" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.420037 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"d1a41540df997fa052831691a5536c71283d086d443f0639d3997a30859d370e"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.422090 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a8c-account-create-update-qmrlh" event={"ID":"37e0fd8e-0f85-48be-b690-c11e3c09f340","Type":"ContainerDied","Data":"e0b6da0abcbdb37a5cad2eade5c52fcd8029eb38e9f1aa7e99827abac29549bb"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.422143 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b6da0abcbdb37a5cad2eade5c52fcd8029eb38e9f1aa7e99827abac29549bb" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.422107 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a8c-account-create-update-qmrlh" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.423310 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8ggwt" event={"ID":"367466e2-34f1-4f2c-9e11-eb6c24c5318c","Type":"ContainerDied","Data":"5317c6760f251bc9751fe010b7bce1cb08e32b0f7d5159c83d81107f454b9348"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.423341 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5317c6760f251bc9751fe010b7bce1cb08e32b0f7d5159c83d81107f454b9348" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.423368 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-8ggwt" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.427635 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-edbb-account-create-update-gt7ll" event={"ID":"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e","Type":"ContainerDied","Data":"53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4"} Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.427679 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c8908fbcc0591fbfb69956d0c0c3109bf14ab8287ebd837545ea519acfd6d4" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.427732 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-edbb-account-create-update-gt7ll" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.433978 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts\") pod \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.434232 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thsbw\" (UniqueName: \"kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw\") pod \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\" (UID: \"45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.434684 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" (UID: "45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.435051 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.439566 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw" (OuterVolumeSpecName: "kube-api-access-thsbw") pod "45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" (UID: "45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e"). InnerVolumeSpecName "kube-api-access-thsbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.536497 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmdn\" (UniqueName: \"kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn\") pod \"2cf66acf-0a94-4850-913b-711b19b88dd3\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.536767 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts\") pod \"2cf66acf-0a94-4850-913b-711b19b88dd3\" (UID: \"2cf66acf-0a94-4850-913b-711b19b88dd3\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.536836 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts\") pod \"37e0fd8e-0f85-48be-b690-c11e3c09f340\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.537035 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6tc\" (UniqueName: \"kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc\") pod \"37e0fd8e-0f85-48be-b690-c11e3c09f340\" (UID: \"37e0fd8e-0f85-48be-b690-c11e3c09f340\") " Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.538108 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cf66acf-0a94-4850-913b-711b19b88dd3" (UID: "2cf66acf-0a94-4850-913b-711b19b88dd3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.538133 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37e0fd8e-0f85-48be-b690-c11e3c09f340" (UID: "37e0fd8e-0f85-48be-b690-c11e3c09f340"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.538867 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e0fd8e-0f85-48be-b690-c11e3c09f340-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.538916 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thsbw\" (UniqueName: \"kubernetes.io/projected/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e-kube-api-access-thsbw\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.538932 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf66acf-0a94-4850-913b-711b19b88dd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.542077 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn" (OuterVolumeSpecName: "kube-api-access-9wmdn") pod "2cf66acf-0a94-4850-913b-711b19b88dd3" (UID: "2cf66acf-0a94-4850-913b-711b19b88dd3"). InnerVolumeSpecName "kube-api-access-9wmdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.542132 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc" (OuterVolumeSpecName: "kube-api-access-ql6tc") pod "37e0fd8e-0f85-48be-b690-c11e3c09f340" (UID: "37e0fd8e-0f85-48be-b690-c11e3c09f340"). InnerVolumeSpecName "kube-api-access-ql6tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.641154 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql6tc\" (UniqueName: \"kubernetes.io/projected/37e0fd8e-0f85-48be-b690-c11e3c09f340-kube-api-access-ql6tc\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4845]: I0202 10:52:12.641196 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmdn\" (UniqueName: \"kubernetes.io/projected/2cf66acf-0a94-4850-913b-711b19b88dd3-kube-api-access-9wmdn\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:13 crc kubenswrapper[4845]: I0202 10:52:13.437736 4845 generic.go:334] "Generic (PLEG): container finished" podID="31859db3-3de0-46d0-a81b-b951f1d45279" containerID="fb8b139b6ecef2c8e8393a96d0038f98cb7a4d3638100daa0e3715cdd7f50c17" exitCode=0 Feb 02 10:52:13 crc kubenswrapper[4845]: I0202 10:52:13.438186 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerDied","Data":"fb8b139b6ecef2c8e8393a96d0038f98cb7a4d3638100daa0e3715cdd7f50c17"} Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.473010 4845 generic.go:334] "Generic (PLEG): container finished" podID="34877df4-b654-4e0c-ac67-da6fd95c249d" containerID="3a7ec3be2e02a83d1849c451b822789235edc5a4672079139f59894d6d036a70" exitCode=0 Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.473126 4845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kgn95" event={"ID":"34877df4-b654-4e0c-ac67-da6fd95c249d","Type":"ContainerDied","Data":"3a7ec3be2e02a83d1849c451b822789235edc5a4672079139f59894d6d036a70"} Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.478384 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cba3-account-create-update-bph8b" event={"ID":"9af47917-824a-452b-b0db-03ad3f4861df","Type":"ContainerDied","Data":"00c59b6d5b21a612fe5716f1f92a3adb28414e415b4e9b69ebd7595359e745c4"} Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.478413 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c59b6d5b21a612fe5716f1f92a3adb28414e415b4e9b69ebd7595359e745c4" Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.527671 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.631605 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts\") pod \"9af47917-824a-452b-b0db-03ad3f4861df\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.631903 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6csf\" (UniqueName: \"kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf\") pod \"9af47917-824a-452b-b0db-03ad3f4861df\" (UID: \"9af47917-824a-452b-b0db-03ad3f4861df\") " Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.632321 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9af47917-824a-452b-b0db-03ad3f4861df" (UID: 
"9af47917-824a-452b-b0db-03ad3f4861df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.632551 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af47917-824a-452b-b0db-03ad3f4861df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.642226 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf" (OuterVolumeSpecName: "kube-api-access-g6csf") pod "9af47917-824a-452b-b0db-03ad3f4861df" (UID: "9af47917-824a-452b-b0db-03ad3f4861df"). InnerVolumeSpecName "kube-api-access-g6csf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:16 crc kubenswrapper[4845]: I0202 10:52:16.734035 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6csf\" (UniqueName: \"kubernetes.io/projected/9af47917-824a-452b-b0db-03ad3f4861df-kube-api-access-g6csf\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.489824 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jbstq" event={"ID":"967b449a-1414-4a5c-b625-bcaf12b17ade","Type":"ContainerStarted","Data":"493cf0ea40e1ecb4c2bc2c0fc9bcd32cc6e220ecfec73e51aec90faf9abebac3"} Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.493382 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerStarted","Data":"bd655aec5adcf4557d3cd7bb2d2b2176da4d03f540d31774ecb69beed6ccf9fd"} Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.509807 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jbstq" podStartSLOduration=2.293468026 
podStartE2EDuration="10.509792149s" podCreationTimestamp="2026-02-02 10:52:07 +0000 UTC" firstStartedPulling="2026-02-02 10:52:08.641943174 +0000 UTC m=+1209.733344624" lastFinishedPulling="2026-02-02 10:52:16.858267297 +0000 UTC m=+1217.949668747" observedRunningTime="2026-02-02 10:52:17.506761193 +0000 UTC m=+1218.598162633" watchObservedRunningTime="2026-02-02 10:52:17.509792149 +0000 UTC m=+1218.601193599" Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.511105 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cba3-account-create-update-bph8b" Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.517013 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"ab146fc1a35a7273f26471788eb3651c45f72d8d7bced303478fe4e7b486106b"} Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.517069 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"81be3a13f800d5dc0821464060f981d73bf8707d121acef9872d7cbd7e764eac"} Feb 02 10:52:17 crc kubenswrapper[4845]: I0202 10:52:17.517083 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"f18e574b0772d18c72da2365bbb00a93f5efd2703c044c947bd63bedf35aaa7e"} Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.173557 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kgn95" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.270274 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle\") pod \"34877df4-b654-4e0c-ac67-da6fd95c249d\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.270340 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkc25\" (UniqueName: \"kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25\") pod \"34877df4-b654-4e0c-ac67-da6fd95c249d\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.270431 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data\") pod \"34877df4-b654-4e0c-ac67-da6fd95c249d\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.270583 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data\") pod \"34877df4-b654-4e0c-ac67-da6fd95c249d\" (UID: \"34877df4-b654-4e0c-ac67-da6fd95c249d\") " Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.274568 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34877df4-b654-4e0c-ac67-da6fd95c249d" (UID: "34877df4-b654-4e0c-ac67-da6fd95c249d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.276918 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25" (OuterVolumeSpecName: "kube-api-access-rkc25") pod "34877df4-b654-4e0c-ac67-da6fd95c249d" (UID: "34877df4-b654-4e0c-ac67-da6fd95c249d"). InnerVolumeSpecName "kube-api-access-rkc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.321078 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34877df4-b654-4e0c-ac67-da6fd95c249d" (UID: "34877df4-b654-4e0c-ac67-da6fd95c249d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.332272 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data" (OuterVolumeSpecName: "config-data") pod "34877df4-b654-4e0c-ac67-da6fd95c249d" (UID: "34877df4-b654-4e0c-ac67-da6fd95c249d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.374368 4845 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.374498 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.374510 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkc25\" (UniqueName: \"kubernetes.io/projected/34877df4-b654-4e0c-ac67-da6fd95c249d-kube-api-access-rkc25\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.374521 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34877df4-b654-4e0c-ac67-da6fd95c249d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.523431 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kgn95" event={"ID":"34877df4-b654-4e0c-ac67-da6fd95c249d","Type":"ContainerDied","Data":"46b3e4f9d3e3e603b74f4066b84d76558af5b054c23ef7c7b9de906dee295d9c"} Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.523507 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b3e4f9d3e3e603b74f4066b84d76558af5b054c23ef7c7b9de906dee295d9c" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.523468 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kgn95" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.530069 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"70063302b184916c235c5fd4a1532435663c441d53879f700c408590babab027"} Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.530403 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"bc9250ab28e8c7cbc5470cf90b33e00f598a4fe8e78e976f916ac1aaa79460c6"} Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.903957 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909792 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb890e0-ca91-4204-8e4b-9036a64e56e1" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909827 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb890e0-ca91-4204-8e4b-9036a64e56e1" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909841 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af47917-824a-452b-b0db-03ad3f4861df" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909850 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af47917-824a-452b-b0db-03ad3f4861df" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909857 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf66acf-0a94-4850-913b-711b19b88dd3" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909864 4845 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2cf66acf-0a94-4850-913b-711b19b88dd3" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909907 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34877df4-b654-4e0c-ac67-da6fd95c249d" containerName="glance-db-sync" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909914 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="34877df4-b654-4e0c-ac67-da6fd95c249d" containerName="glance-db-sync" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909924 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e0fd8e-0f85-48be-b690-c11e3c09f340" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909930 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e0fd8e-0f85-48be-b690-c11e3c09f340" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909943 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1fe923-0409-4c3c-869c-9d0c09a2506a" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909949 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1fe923-0409-4c3c-869c-9d0c09a2506a" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909964 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367466e2-34f1-4f2c-9e11-eb6c24c5318c" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.909969 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="367466e2-34f1-4f2c-9e11-eb6c24c5318c" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.909988 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afec66f7-184b-44f1-a172-b1e78739309d" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: 
I0202 10:52:18.909996 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="afec66f7-184b-44f1-a172-b1e78739309d" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: E0202 10:52:18.910007 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910013 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910315 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af47917-824a-452b-b0db-03ad3f4861df" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910331 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb890e0-ca91-4204-8e4b-9036a64e56e1" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910349 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="34877df4-b654-4e0c-ac67-da6fd95c249d" containerName="glance-db-sync" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910359 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1fe923-0409-4c3c-869c-9d0c09a2506a" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910373 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910380 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf66acf-0a94-4850-913b-711b19b88dd3" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910390 4845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="367466e2-34f1-4f2c-9e11-eb6c24c5318c" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910398 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e0fd8e-0f85-48be-b690-c11e3c09f340" containerName="mariadb-account-create-update" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.910409 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="afec66f7-184b-44f1-a172-b1e78739309d" containerName="mariadb-database-create" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.911487 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.931252 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.997106 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.997182 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.997219 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhwv\" (UniqueName: \"kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: 
\"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.997255 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:18 crc kubenswrapper[4845]: I0202 10:52:18.997452 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.099828 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhwv\" (UniqueName: \"kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.099933 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.100050 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: 
\"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.100137 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.100212 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.101280 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.103206 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.103845 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc 
kubenswrapper[4845]: I0202 10:52:19.104656 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.132023 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhwv\" (UniqueName: \"kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv\") pod \"dnsmasq-dns-5b946c75cc-g827z\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.268911 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.567671 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"6e450f575a10ed5eb53c16a2f8cfb924b06dfa9ed4395c0b603f21cf26457698"} Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.567940 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d6db6e42-984a-484b-9f90-e6efa9817f37","Type":"ContainerStarted","Data":"02c633fbb7c37eada8bc13c59416fc56c82d489fed906a675bc9ca4e06f4e1dc"} Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.653812 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.27389171 podStartE2EDuration="50.653787475s" podCreationTimestamp="2026-02-02 10:51:29 +0000 UTC" firstStartedPulling="2026-02-02 10:52:03.477064574 +0000 UTC m=+1204.568466024" lastFinishedPulling="2026-02-02 10:52:16.856960339 +0000 UTC m=+1217.948361789" 
observedRunningTime="2026-02-02 10:52:19.64206258 +0000 UTC m=+1220.733464020" watchObservedRunningTime="2026-02-02 10:52:19.653787475 +0000 UTC m=+1220.745188925" Feb 02 10:52:19 crc kubenswrapper[4845]: W0202 10:52:19.821158 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a38136_159d_482a_988e_07f3b77fdbb4.slice/crio-a82710ddcd829b8f907c2ebc80d76437306ca2f6502c542d653b11393fa4153c WatchSource:0}: Error finding container a82710ddcd829b8f907c2ebc80d76437306ca2f6502c542d653b11393fa4153c: Status 404 returned error can't find the container with id a82710ddcd829b8f907c2ebc80d76437306ca2f6502c542d653b11393fa4153c Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.821404 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.916612 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.958486 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.961459 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.967328 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 10:52:19 crc kubenswrapper[4845]: I0202 10:52:19.992034 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.031869 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.032002 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.032038 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.032110 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.032133 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdm58\" (UniqueName: \"kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.032292 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137254 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137330 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137360 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137412 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137439 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdm58\" (UniqueName: \"kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.137542 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.138366 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.138411 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: 
I0202 10:52:20.139084 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.139143 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.139774 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.196003 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdm58\" (UniqueName: \"kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58\") pod \"dnsmasq-dns-74f6bcbc87-g5r95\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.218921 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.582578 4845 generic.go:334] "Generic (PLEG): container finished" podID="e0a38136-159d-482a-988e-07f3b77fdbb4" containerID="f21c9d9931f7cb69bc5973a19397e3c42576eb648cfa1130b5861a6b72c96acd" exitCode=0 Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.582770 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" event={"ID":"e0a38136-159d-482a-988e-07f3b77fdbb4","Type":"ContainerDied","Data":"f21c9d9931f7cb69bc5973a19397e3c42576eb648cfa1130b5861a6b72c96acd"} Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.582970 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" event={"ID":"e0a38136-159d-482a-988e-07f3b77fdbb4","Type":"ContainerStarted","Data":"a82710ddcd829b8f907c2ebc80d76437306ca2f6502c542d653b11393fa4153c"} Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.593321 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerStarted","Data":"becbf6e82dc65913f8e07cd6976d63486bd12ef17a78f01ee8609cf3cf55427b"} Feb 02 10:52:20 crc kubenswrapper[4845]: I0202 10:52:20.872284 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:20 crc kubenswrapper[4845]: W0202 10:52:20.880041 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc6155c0_72ff_4f97_9748_716e3df8ad88.slice/crio-4cc1cd6ad5cc574e57eca193db9f213196ac2057c994f0da0d19824fee8e97c1 WatchSource:0}: Error finding container 4cc1cd6ad5cc574e57eca193db9f213196ac2057c994f0da0d19824fee8e97c1: Status 404 returned error can't find the container with id 4cc1cd6ad5cc574e57eca193db9f213196ac2057c994f0da0d19824fee8e97c1 Feb 02 10:52:21 crc 
kubenswrapper[4845]: I0202 10:52:21.071983 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.165386 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb\") pod \"e0a38136-159d-482a-988e-07f3b77fdbb4\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.165478 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config\") pod \"e0a38136-159d-482a-988e-07f3b77fdbb4\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.165551 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb\") pod \"e0a38136-159d-482a-988e-07f3b77fdbb4\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.165672 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc\") pod \"e0a38136-159d-482a-988e-07f3b77fdbb4\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.165712 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhwv\" (UniqueName: \"kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv\") pod \"e0a38136-159d-482a-988e-07f3b77fdbb4\" (UID: \"e0a38136-159d-482a-988e-07f3b77fdbb4\") " Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.171073 4845 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv" (OuterVolumeSpecName: "kube-api-access-vwhwv") pod "e0a38136-159d-482a-988e-07f3b77fdbb4" (UID: "e0a38136-159d-482a-988e-07f3b77fdbb4"). InnerVolumeSpecName "kube-api-access-vwhwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.194562 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0a38136-159d-482a-988e-07f3b77fdbb4" (UID: "e0a38136-159d-482a-988e-07f3b77fdbb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.195201 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0a38136-159d-482a-988e-07f3b77fdbb4" (UID: "e0a38136-159d-482a-988e-07f3b77fdbb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.196637 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config" (OuterVolumeSpecName: "config") pod "e0a38136-159d-482a-988e-07f3b77fdbb4" (UID: "e0a38136-159d-482a-988e-07f3b77fdbb4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.198198 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0a38136-159d-482a-988e-07f3b77fdbb4" (UID: "e0a38136-159d-482a-988e-07f3b77fdbb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.269192 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.269244 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhwv\" (UniqueName: \"kubernetes.io/projected/e0a38136-159d-482a-988e-07f3b77fdbb4-kube-api-access-vwhwv\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.269259 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.269273 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.269284 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a38136-159d-482a-988e-07f3b77fdbb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.603127 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.603165 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" event={"ID":"e0a38136-159d-482a-988e-07f3b77fdbb4","Type":"ContainerDied","Data":"a82710ddcd829b8f907c2ebc80d76437306ca2f6502c542d653b11393fa4153c"} Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.604160 4845 scope.go:117] "RemoveContainer" containerID="f21c9d9931f7cb69bc5973a19397e3c42576eb648cfa1130b5861a6b72c96acd" Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.611700 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"31859db3-3de0-46d0-a81b-b951f1d45279","Type":"ContainerStarted","Data":"b0b33f8b69d3389ea6162be514e962a2eb636734d57bb003836896c3d935c8e1"} Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.615798 4845 generic.go:334] "Generic (PLEG): container finished" podID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerID="3f008994368901472bc0803e0e3411d898bba47e6f98a3c829832d620028583b" exitCode=0 Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.615858 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" event={"ID":"cc6155c0-72ff-4f97-9748-716e3df8ad88","Type":"ContainerDied","Data":"3f008994368901472bc0803e0e3411d898bba47e6f98a3c829832d620028583b"} Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.615905 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" event={"ID":"cc6155c0-72ff-4f97-9748-716e3df8ad88","Type":"ContainerStarted","Data":"4cc1cd6ad5cc574e57eca193db9f213196ac2057c994f0da0d19824fee8e97c1"} Feb 02 10:52:21 crc kubenswrapper[4845]: I0202 10:52:21.661905 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.66186759 
podStartE2EDuration="19.66186759s" podCreationTimestamp="2026-02-02 10:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:21.652506863 +0000 UTC m=+1222.743908313" watchObservedRunningTime="2026-02-02 10:52:21.66186759 +0000 UTC m=+1222.753269040" Feb 02 10:52:22 crc kubenswrapper[4845]: I0202 10:52:22.627910 4845 generic.go:334] "Generic (PLEG): container finished" podID="967b449a-1414-4a5c-b625-bcaf12b17ade" containerID="493cf0ea40e1ecb4c2bc2c0fc9bcd32cc6e220ecfec73e51aec90faf9abebac3" exitCode=0 Feb 02 10:52:22 crc kubenswrapper[4845]: I0202 10:52:22.627993 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jbstq" event={"ID":"967b449a-1414-4a5c-b625-bcaf12b17ade","Type":"ContainerDied","Data":"493cf0ea40e1ecb4c2bc2c0fc9bcd32cc6e220ecfec73e51aec90faf9abebac3"} Feb 02 10:52:22 crc kubenswrapper[4845]: I0202 10:52:22.633426 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" event={"ID":"cc6155c0-72ff-4f97-9748-716e3df8ad88","Type":"ContainerStarted","Data":"2d2c3127f10566f1a93e004f0e822b849cbd5d19678d12b2cc9041ee23459162"} Feb 02 10:52:22 crc kubenswrapper[4845]: I0202 10:52:22.679249 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" podStartSLOduration=3.679229758 podStartE2EDuration="3.679229758s" podCreationTimestamp="2026-02-02 10:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:22.674864614 +0000 UTC m=+1223.766266054" watchObservedRunningTime="2026-02-02 10:52:22.679229758 +0000 UTC m=+1223.770631208" Feb 02 10:52:23 crc kubenswrapper[4845]: I0202 10:52:23.057471 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 10:52:23 crc 
kubenswrapper[4845]: I0202 10:52:23.644378 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.119316 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.247773 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n52gp\" (UniqueName: \"kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp\") pod \"967b449a-1414-4a5c-b625-bcaf12b17ade\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.248135 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle\") pod \"967b449a-1414-4a5c-b625-bcaf12b17ade\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.248170 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data\") pod \"967b449a-1414-4a5c-b625-bcaf12b17ade\" (UID: \"967b449a-1414-4a5c-b625-bcaf12b17ade\") " Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.257228 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp" (OuterVolumeSpecName: "kube-api-access-n52gp") pod "967b449a-1414-4a5c-b625-bcaf12b17ade" (UID: "967b449a-1414-4a5c-b625-bcaf12b17ade"). InnerVolumeSpecName "kube-api-access-n52gp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.305230 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "967b449a-1414-4a5c-b625-bcaf12b17ade" (UID: "967b449a-1414-4a5c-b625-bcaf12b17ade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.311055 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data" (OuterVolumeSpecName: "config-data") pod "967b449a-1414-4a5c-b625-bcaf12b17ade" (UID: "967b449a-1414-4a5c-b625-bcaf12b17ade"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.350406 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.350447 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967b449a-1414-4a5c-b625-bcaf12b17ade-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.350460 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n52gp\" (UniqueName: \"kubernetes.io/projected/967b449a-1414-4a5c-b625-bcaf12b17ade-kube-api-access-n52gp\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.654138 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jbstq" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.654127 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jbstq" event={"ID":"967b449a-1414-4a5c-b625-bcaf12b17ade","Type":"ContainerDied","Data":"56c91e8dc88a7d8da4509ded8923f1a4a18a719a8066a712556a9928447a6799"} Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.654295 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c91e8dc88a7d8da4509ded8923f1a4a18a719a8066a712556a9928447a6799" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.911615 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.950701 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:24 crc kubenswrapper[4845]: E0202 10:52:24.951287 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967b449a-1414-4a5c-b625-bcaf12b17ade" containerName="keystone-db-sync" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.951309 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="967b449a-1414-4a5c-b625-bcaf12b17ade" containerName="keystone-db-sync" Feb 02 10:52:24 crc kubenswrapper[4845]: E0202 10:52:24.951325 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a38136-159d-482a-988e-07f3b77fdbb4" containerName="init" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.951334 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a38136-159d-482a-988e-07f3b77fdbb4" containerName="init" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.951607 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a38136-159d-482a-988e-07f3b77fdbb4" containerName="init" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.951630 4845 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="967b449a-1414-4a5c-b625-bcaf12b17ade" containerName="keystone-db-sync" Feb 02 10:52:24 crc kubenswrapper[4845]: I0202 10:52:24.962381 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.017855 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-glpvf"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.025589 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.049759 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.050081 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.050275 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4r6h5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.050498 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.071135 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.134873 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc 
kubenswrapper[4845]: I0202 10:52:25.225662 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgg4z\" (UniqueName: \"kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225694 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225709 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225733 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225750 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " 
pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225767 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225802 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225833 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225898 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225915 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " 
pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.225932 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sgx4\" (UniqueName: \"kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.232111 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-glpvf"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.283586 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-kxrm5"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.284989 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.317374 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kxrm5"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.321310 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.321533 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-2czql" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327609 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327663 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327722 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327744 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327761 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sgx4\" (UniqueName: \"kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327798 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327817 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjs7z\" (UniqueName: 
\"kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327850 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.327993 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328022 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgg4z\" (UniqueName: \"kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328054 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328073 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328098 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328120 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.328137 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.335688 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cpdt4"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.338231 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.341194 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.341791 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.344969 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.345428 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.346040 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 
10:52:25.353278 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cpdt4"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.360847 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tjnkn" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.361068 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.361199 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.372347 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.372436 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.375758 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.377541 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys\") pod \"keystone-bootstrap-glpvf\" 
(UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.378961 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.432933 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sgx4\" (UniqueName: \"kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4\") pod \"keystone-bootstrap-glpvf\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.435013 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.435087 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.435330 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjs7z\" (UniqueName: \"kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 
10:52:25.436521 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgg4z\" (UniqueName: \"kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z\") pod \"dnsmasq-dns-847c4cc679-v7wns\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.445631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.454010 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.525809 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjs7z\" (UniqueName: \"kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z\") pod \"heat-db-sync-kxrm5\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.547535 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.547602 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9mpk6\" (UniqueName: \"kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.547672 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.612510 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-g8b4r"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.615555 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.623960 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kxrm5" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.642043 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.643083 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651590 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflw4\" (UniqueName: \"kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651672 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651706 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651751 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651775 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 
10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651812 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpk6\" (UniqueName: \"kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.651973 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.652079 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.658740 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2glkz" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.658999 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.659171 4845 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.670009 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.678461 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.690071 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g8b4r"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.704252 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="dnsmasq-dns" containerID="cri-o://2d2c3127f10566f1a93e004f0e822b849cbd5d19678d12b2cc9041ee23459162" gracePeriod=10 Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.709114 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mpk6\" (UniqueName: \"kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6\") pod \"neutron-db-sync-cpdt4\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.712587 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.754634 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.756656 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.756785 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rflw4\" (UniqueName: \"kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.756852 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.756894 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 
10:52:25.756942 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.760254 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.785429 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.786539 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.802331 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.802456 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gxjbc"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.804059 4845 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.805432 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rflw4\" (UniqueName: \"kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.818418 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data\") pod \"cinder-db-sync-g8b4r\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.843714 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kzmx2" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.843946 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.866383 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gxjbc"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.870821 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.870872 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.870946 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.871002 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4fj\" (UniqueName: \"kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.871032 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.872210 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.900521 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.902851 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.919938 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.920748 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hft5g"] Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.925798 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.936997 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.937228 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rddwg" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.947954 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976724 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfj8\" (UniqueName: \"kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976784 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976809 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976839 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4fj\" (UniqueName: \"kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976872 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.976984 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977052 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977095 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data\") pod 
\"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977120 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977142 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977168 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcx2l\" (UniqueName: \"kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977195 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977229 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts\") pod 
\"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977243 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.977504 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.983836 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:25 crc kubenswrapper[4845]: I0202 10:52:25.991799 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:25.999579 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hft5g"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.012464 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts\") pod 
\"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.028407 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4fj\" (UniqueName: \"kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj\") pod \"placement-db-sync-gxjbc\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.077526 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079096 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079173 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfj8\" (UniqueName: \"kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079222 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079257 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079326 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079426 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079455 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079488 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcx2l\" (UniqueName: \"kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.079515 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.081519 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.085237 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.086007 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.086751 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.087801 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" 
(UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.099558 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.104831 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.135961 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.138240 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcx2l\" (UniqueName: \"kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l\") pod \"dnsmasq-dns-785d8bcb8c-sbm2k\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.138975 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.143959 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.144391 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.157825 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.162586 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfj8\" (UniqueName: \"kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8\") pod \"barbican-db-sync-hft5g\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.201045 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gxjbc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.246435 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.282160 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hft5g" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.299519 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.299711 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.299752 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.299791 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7v5\" (UniqueName: \"kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.299969 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.302692 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.302758 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.351874 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.354954 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.361521 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.364482 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.364874 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.365176 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-snsd2" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.365640 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404479 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404525 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404573 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404639 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404658 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404677 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g7v5\" (UniqueName: \"kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " 
pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.404744 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.412530 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.412863 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.417570 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.419362 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.433360 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts\") pod 
\"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.444063 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.447004 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g7v5\" (UniqueName: \"kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5\") pod \"ceilometer-0\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.503428 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508376 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508429 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508506 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508539 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508571 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508645 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hxp\" (UniqueName: \"kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508678 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.508741 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.515900 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.527851 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.538645 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.541079 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.559186 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623077 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623152 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc 
kubenswrapper[4845]: I0202 10:52:26.623217 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623288 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623322 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623382 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623415 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 
10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623436 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623469 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623501 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623554 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " 
pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623595 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623646 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hxp\" (UniqueName: \"kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623678 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.623706 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.628282 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " 
pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.629021 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.632206 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.632537 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.641834 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.656332 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.656383 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ed60d0e4f5ad2fb51e67cadb4519054184ad51c31b402d173121c9411d32387/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.657354 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.658592 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hxp\" (UniqueName: \"kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp\") pod \"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.727813 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.727876 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.727978 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.728010 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.728033 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.728095 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.728171 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.728217 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.735103 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.735360 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.739718 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.739757 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a65ad887e305c92ac1c8235cc9c5fc327f1ea7ce91b9974356e11ee00bc2f81/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.740098 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.744008 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.747220 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.753853 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.779874 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.797267 4845 generic.go:334] "Generic (PLEG): container finished" podID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerID="2d2c3127f10566f1a93e004f0e822b849cbd5d19678d12b2cc9041ee23459162" exitCode=0 Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.797342 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" event={"ID":"cc6155c0-72ff-4f97-9748-716e3df8ad88","Type":"ContainerDied","Data":"2d2c3127f10566f1a93e004f0e822b849cbd5d19678d12b2cc9041ee23459162"} Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.840146 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.889231 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.896059 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod 
\"glance-default-external-api-0\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:26 crc kubenswrapper[4845]: I0202 10:52:26.923428 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.050626 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.051000 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.051039 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.051532 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.051786 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdm58\" (UniqueName: 
\"kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.051850 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc\") pod \"cc6155c0-72ff-4f97-9748-716e3df8ad88\" (UID: \"cc6155c0-72ff-4f97-9748-716e3df8ad88\") " Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.064989 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58" (OuterVolumeSpecName: "kube-api-access-hdm58") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). InnerVolumeSpecName "kube-api-access-hdm58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.154003 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.160328 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdm58\" (UniqueName: \"kubernetes.io/projected/cc6155c0-72ff-4f97-9748-716e3df8ad88-kube-api-access-hdm58\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.177777 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.200550 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.206303 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.218850 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.219537 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.241085 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kxrm5"] Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.246001 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config" (OuterVolumeSpecName: "config") pod "cc6155c0-72ff-4f97-9748-716e3df8ad88" (UID: "cc6155c0-72ff-4f97-9748-716e3df8ad88"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.263716 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-glpvf"] Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.264131 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.264156 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.264192 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.264202 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.264216 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6155c0-72ff-4f97-9748-716e3df8ad88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:27 crc kubenswrapper[4845]: W0202 10:52:27.287029 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod250e18d9_cb14_4309_8d0c_fb341511dba6.slice/crio-17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc WatchSource:0}: Error finding container 17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc: Status 404 returned error can't find the 
container with id 17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc Feb 02 10:52:27 crc kubenswrapper[4845]: W0202 10:52:27.290535 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod250ffbd9_33d6_4a0d_b812_1d092341d4f9.slice/crio-de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921 WatchSource:0}: Error finding container de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921: Status 404 returned error can't find the container with id de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921 Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.887732 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.925574 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-glpvf" event={"ID":"250ffbd9-33d6-4a0d-b812-1d092341d4f9","Type":"ContainerStarted","Data":"cc9a388001d07f3511088bc5b867a073370c23dfc566a76ed6b25957f4cd9611"} Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.925620 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-glpvf" event={"ID":"250ffbd9-33d6-4a0d-b812-1d092341d4f9","Type":"ContainerStarted","Data":"de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921"} Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.936261 4845 generic.go:334] "Generic (PLEG): container finished" podID="4dc8c937-0b9b-461a-be1b-02bdb587b70e" containerID="237eb45884084c45016b5f46a907fcfa367aa8dbef9b4eddbe88c4430a23713e" exitCode=0 Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.936318 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" event={"ID":"4dc8c937-0b9b-461a-be1b-02bdb587b70e","Type":"ContainerDied","Data":"237eb45884084c45016b5f46a907fcfa367aa8dbef9b4eddbe88c4430a23713e"} Feb 02 10:52:27 crc 
kubenswrapper[4845]: I0202 10:52:27.936343 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" event={"ID":"4dc8c937-0b9b-461a-be1b-02bdb587b70e","Type":"ContainerStarted","Data":"5155295f42a65af746e2fad57322cd5d5a1ab40f98230c024d2700e401492665"} Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.952119 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g8b4r"] Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.972392 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kxrm5" event={"ID":"250e18d9-cb14-4309-8d0c-fb341511dba6","Type":"ContainerStarted","Data":"17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc"} Feb 02 10:52:27 crc kubenswrapper[4845]: I0202 10:52:27.979701 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cpdt4"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.078384 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" event={"ID":"cc6155c0-72ff-4f97-9748-716e3df8ad88","Type":"ContainerDied","Data":"4cc1cd6ad5cc574e57eca193db9f213196ac2057c994f0da0d19824fee8e97c1"} Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.078445 4845 scope.go:117] "RemoveContainer" containerID="2d2c3127f10566f1a93e004f0e822b849cbd5d19678d12b2cc9041ee23459162" Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.079740 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5r95" Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.138613 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gxjbc"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.165543 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-glpvf" podStartSLOduration=4.165517253 podStartE2EDuration="4.165517253s" podCreationTimestamp="2026-02-02 10:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:28.028632035 +0000 UTC m=+1229.120033485" watchObservedRunningTime="2026-02-02 10:52:28.165517253 +0000 UTC m=+1229.256918703" Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.174417 4845 scope.go:117] "RemoveContainer" containerID="3f008994368901472bc0803e0e3411d898bba47e6f98a3c829832d620028583b" Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.308740 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.382013 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.407572 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.462258 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5r95"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.577942 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hft5g"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.621912 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 
10:52:28.686818 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:28 crc kubenswrapper[4845]: I0202 10:52:28.964112 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.116951 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cpdt4" event={"ID":"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2","Type":"ContainerStarted","Data":"eaea8c79134a763eb67941fd983f9ce0e269d99bbc22b44421da606e30805a94"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.116999 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cpdt4" event={"ID":"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2","Type":"ContainerStarted","Data":"910eea5be92e0d78d64ac4574401cd26e5b94f8c77cd06cf2a9e9a5f781e5430"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.123990 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerStarted","Data":"320ae8c7183d27e383573443ad820bda2273505766b721abc86327e40604e0ca"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.124974 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.127761 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerStarted","Data":"3a2962ac3acf70d6e69f1fadfc2545a5d2cf6a481bf3dd186493c288796a95b6"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.142587 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g8b4r" event={"ID":"183b0ef9-490f-43a1-a464-2bd64a820ebd","Type":"ContainerStarted","Data":"8487cd80461a5551ea17adb0f75cb6e4ce51ee5bd0eda70e468bd0162117e3f9"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.151119 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxjbc" event={"ID":"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b","Type":"ContainerStarted","Data":"f5152a46482c33a388878313470bd17796ec05812413482c82f7e14ccb92881e"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.155678 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cpdt4" podStartSLOduration=4.155651884 podStartE2EDuration="4.155651884s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:29.133876472 +0000 UTC m=+1230.225277932" watchObservedRunningTime="2026-02-02 10:52:29.155651884 +0000 UTC m=+1230.247053334" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.159018 4845 generic.go:334] "Generic (PLEG): container finished" podID="08802fb3-9897-4819-a38b-fe13e8892b47" containerID="ad0d9ec1ecba0e033c137223df62f5b330d1ec35e21a3f0e070e5061487e39e2" exitCode=0 Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.159128 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" 
event={"ID":"08802fb3-9897-4819-a38b-fe13e8892b47","Type":"ContainerDied","Data":"ad0d9ec1ecba0e033c137223df62f5b330d1ec35e21a3f0e070e5061487e39e2"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.159176 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" event={"ID":"08802fb3-9897-4819-a38b-fe13e8892b47","Type":"ContainerStarted","Data":"990f2a82fd916ae58e607d4d041c7e0245ec85e42bc4953b989f05218c6f9e19"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.187845 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hft5g" event={"ID":"9868fb5b-b18e-42b0-8532-6e6a55da71d2","Type":"ContainerStarted","Data":"303b261fa422f333b504b71646d5d15f21982e8b9a4a8cba35c7d99137363acc"} Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.258410 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.258873 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgg4z\" (UniqueName: \"kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.258937 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.258986 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.259056 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.259144 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.266608 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z" (OuterVolumeSpecName: "kube-api-access-wgg4z") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "kube-api-access-wgg4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.273650 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.304396 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.312514 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config" (OuterVolumeSpecName: "config") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.321067 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.325444 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.365159 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.366452 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") pod \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\" (UID: \"4dc8c937-0b9b-461a-be1b-02bdb587b70e\") " Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.367334 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgg4z\" (UniqueName: \"kubernetes.io/projected/4dc8c937-0b9b-461a-be1b-02bdb587b70e-kube-api-access-wgg4z\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.367818 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.367913 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.367994 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.368054 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: W0202 10:52:29.368390 4845 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4dc8c937-0b9b-461a-be1b-02bdb587b70e/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 02 
10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.369052 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dc8c937-0b9b-461a-be1b-02bdb587b70e" (UID: "4dc8c937-0b9b-461a-be1b-02bdb587b70e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.471226 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc8c937-0b9b-461a-be1b-02bdb587b70e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:29 crc kubenswrapper[4845]: I0202 10:52:29.737776 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" path="/var/lib/kubelet/pods/cc6155c0-72ff-4f97-9748-716e3df8ad88/volumes" Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.209714 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerStarted","Data":"754cb2f68170ee366688fac6af09583225928231cbd2adb4266116caffa237e6"} Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.212061 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" event={"ID":"4dc8c937-0b9b-461a-be1b-02bdb587b70e","Type":"ContainerDied","Data":"5155295f42a65af746e2fad57322cd5d5a1ab40f98230c024d2700e401492665"} Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.212098 4845 scope.go:117] "RemoveContainer" containerID="237eb45884084c45016b5f46a907fcfa367aa8dbef9b4eddbe88c4430a23713e" Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.212195 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-v7wns" Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.215074 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerStarted","Data":"9a806cef3c6476b0a1f1311cf8436474892e88fa49f5a547d7d4cfbbecc99d66"} Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.259142 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" event={"ID":"08802fb3-9897-4819-a38b-fe13e8892b47","Type":"ContainerStarted","Data":"3ed1c7658c68f0c8c4a3c24a82346d038eae78839b07b31f8eefe48619b71f5d"} Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.259244 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.323931 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" podStartSLOduration=5.323864119 podStartE2EDuration="5.323864119s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:30.286964426 +0000 UTC m=+1231.378365886" watchObservedRunningTime="2026-02-02 10:52:30.323864119 +0000 UTC m=+1231.415265569" Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.466591 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:30 crc kubenswrapper[4845]: I0202 10:52:30.473133 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-v7wns"] Feb 02 10:52:31 crc kubenswrapper[4845]: I0202 10:52:31.330873 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerStarted","Data":"5afe1239bc585e0bef47e3d113767ef48eca1fb7a34ae2876787a3e542d61760"} Feb 02 10:52:31 crc kubenswrapper[4845]: I0202 10:52:31.726557 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc8c937-0b9b-461a-be1b-02bdb587b70e" path="/var/lib/kubelet/pods/4dc8c937-0b9b-461a-be1b-02bdb587b70e/volumes" Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.348926 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerStarted","Data":"bd78b5513d3cbaeab95964442b610e53834c55a502d340e969dbf735d6641a12"} Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.348999 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-log" containerID="cri-o://5afe1239bc585e0bef47e3d113767ef48eca1fb7a34ae2876787a3e542d61760" gracePeriod=30 Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.349240 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-httpd" containerID="cri-o://bd78b5513d3cbaeab95964442b610e53834c55a502d340e969dbf735d6641a12" gracePeriod=30 Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.363176 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerStarted","Data":"27b4758631a6299ed33bc9ad77808ae89e6b4adea1f9c02dc02478e0b78913fa"} Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.363192 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" 
containerName="glance-log" containerID="cri-o://754cb2f68170ee366688fac6af09583225928231cbd2adb4266116caffa237e6" gracePeriod=30 Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.363252 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-httpd" containerID="cri-o://27b4758631a6299ed33bc9ad77808ae89e6b4adea1f9c02dc02478e0b78913fa" gracePeriod=30 Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.402836 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.402807718 podStartE2EDuration="7.402807718s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:32.383470555 +0000 UTC m=+1233.474872005" watchObservedRunningTime="2026-02-02 10:52:32.402807718 +0000 UTC m=+1233.494209168" Feb 02 10:52:32 crc kubenswrapper[4845]: I0202 10:52:32.428547 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.428521372 podStartE2EDuration="7.428521372s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:32.417926689 +0000 UTC m=+1233.509328149" watchObservedRunningTime="2026-02-02 10:52:32.428521372 +0000 UTC m=+1233.519922822" Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.057097 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.062266 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 
02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.395121 4845 generic.go:334] "Generic (PLEG): container finished" podID="f7543ae2-53fd-42d7-971f-a09923f10187" containerID="bd78b5513d3cbaeab95964442b610e53834c55a502d340e969dbf735d6641a12" exitCode=0 Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.395160 4845 generic.go:334] "Generic (PLEG): container finished" podID="f7543ae2-53fd-42d7-971f-a09923f10187" containerID="5afe1239bc585e0bef47e3d113767ef48eca1fb7a34ae2876787a3e542d61760" exitCode=143 Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.395208 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerDied","Data":"bd78b5513d3cbaeab95964442b610e53834c55a502d340e969dbf735d6641a12"} Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.395280 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerDied","Data":"5afe1239bc585e0bef47e3d113767ef48eca1fb7a34ae2876787a3e542d61760"} Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.401378 4845 generic.go:334] "Generic (PLEG): container finished" podID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerID="27b4758631a6299ed33bc9ad77808ae89e6b4adea1f9c02dc02478e0b78913fa" exitCode=0 Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.401440 4845 generic.go:334] "Generic (PLEG): container finished" podID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerID="754cb2f68170ee366688fac6af09583225928231cbd2adb4266116caffa237e6" exitCode=143 Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.401468 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerDied","Data":"27b4758631a6299ed33bc9ad77808ae89e6b4adea1f9c02dc02478e0b78913fa"} Feb 02 10:52:33 crc 
kubenswrapper[4845]: I0202 10:52:33.401526 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerDied","Data":"754cb2f68170ee366688fac6af09583225928231cbd2adb4266116caffa237e6"} Feb 02 10:52:33 crc kubenswrapper[4845]: I0202 10:52:33.407998 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 10:52:34 crc kubenswrapper[4845]: I0202 10:52:34.416281 4845 generic.go:334] "Generic (PLEG): container finished" podID="250ffbd9-33d6-4a0d-b812-1d092341d4f9" containerID="cc9a388001d07f3511088bc5b867a073370c23dfc566a76ed6b25957f4cd9611" exitCode=0 Feb 02 10:52:34 crc kubenswrapper[4845]: I0202 10:52:34.416355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-glpvf" event={"ID":"250ffbd9-33d6-4a0d-b812-1d092341d4f9","Type":"ContainerDied","Data":"cc9a388001d07f3511088bc5b867a073370c23dfc566a76ed6b25957f4cd9611"} Feb 02 10:52:36 crc kubenswrapper[4845]: I0202 10:52:36.251732 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:52:36 crc kubenswrapper[4845]: I0202 10:52:36.317243 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:52:36 crc kubenswrapper[4845]: I0202 10:52:36.317489 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7lh98" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" containerID="cri-o://7b934d9dc539ae78cc75a18c8a04ead9cef0ee49e2411933655a99884f524af5" gracePeriod=10 Feb 02 10:52:36 crc kubenswrapper[4845]: I0202 10:52:36.460730 4845 generic.go:334] "Generic (PLEG): container finished" podID="125bfda8-e971-4249-8b07-0bbff61e4725" containerID="7b934d9dc539ae78cc75a18c8a04ead9cef0ee49e2411933655a99884f524af5" exitCode=0 
Feb 02 10:52:36 crc kubenswrapper[4845]: I0202 10:52:36.460804 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7lh98" event={"ID":"125bfda8-e971-4249-8b07-0bbff61e4725","Type":"ContainerDied","Data":"7b934d9dc539ae78cc75a18c8a04ead9cef0ee49e2411933655a99884f524af5"} Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.288390 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.303780 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.466769 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sgx4\" (UniqueName: \"kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.466839 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.466909 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.466938 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467017 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467080 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467267 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467305 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467328 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467569 4845 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467681 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467815 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467846 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys\") pod \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\" (UID: \"250ffbd9-33d6-4a0d-b812-1d092341d4f9\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.467998 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.468054 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts\") pod \"f7543ae2-53fd-42d7-971f-a09923f10187\" (UID: \"f7543ae2-53fd-42d7-971f-a09923f10187\") " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.468290 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs" (OuterVolumeSpecName: "logs") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.469006 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.469026 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7543ae2-53fd-42d7-971f-a09923f10187-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.490002 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts" (OuterVolumeSpecName: "scripts") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.490139 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.491930 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts" (OuterVolumeSpecName: "scripts") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.494171 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6" (OuterVolumeSpecName: "kube-api-access-xxcf6") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "kube-api-access-xxcf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.494660 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4" (OuterVolumeSpecName: "kube-api-access-9sgx4") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "kube-api-access-9sgx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.496232 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995" (OuterVolumeSpecName: "glance") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.506762 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.534219 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.539493 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data" (OuterVolumeSpecName: "config-data") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.547159 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.548075 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7543ae2-53fd-42d7-971f-a09923f10187","Type":"ContainerDied","Data":"9a806cef3c6476b0a1f1311cf8436474892e88fa49f5a547d7d4cfbbecc99d66"} Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.548139 4845 scope.go:117] "RemoveContainer" containerID="bd78b5513d3cbaeab95964442b610e53834c55a502d340e969dbf735d6641a12" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.562153 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-glpvf" event={"ID":"250ffbd9-33d6-4a0d-b812-1d092341d4f9","Type":"ContainerDied","Data":"de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921"} Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.562191 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-glpvf" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.562221 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3e03d863b0420869eef2e3a7c17fb4d5f1b982bfb909ef8098e97f81653921" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.575034 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585258 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585358 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sgx4\" (UniqueName: 
\"kubernetes.io/projected/250ffbd9-33d6-4a0d-b812-1d092341d4f9-kube-api-access-9sgx4\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585437 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585542 4845 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") on node \"crc\" " Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585630 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/f7543ae2-53fd-42d7-971f-a09923f10187-kube-api-access-xxcf6\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585727 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585810 4845 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.585924 4845 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.604828 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "250ffbd9-33d6-4a0d-b812-1d092341d4f9" (UID: "250ffbd9-33d6-4a0d-b812-1d092341d4f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.607703 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data" (OuterVolumeSpecName: "config-data") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.623691 4845 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.623807 4845 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995") on node "crc" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.624722 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7543ae2-53fd-42d7-971f-a09923f10187" (UID: "f7543ae2-53fd-42d7-971f-a09923f10187"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.688598 4845 reconciler_common.go:293] "Volume detached for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.688645 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ffbd9-33d6-4a0d-b812-1d092341d4f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.688659 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.688671 4845 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7543ae2-53fd-42d7-971f-a09923f10187-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.763221 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7lh98" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.874172 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.889932 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.925512 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 
10:52:39 crc kubenswrapper[4845]: E0202 10:52:39.926426 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-httpd" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926443 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-httpd" Feb 02 10:52:39 crc kubenswrapper[4845]: E0202 10:52:39.926482 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="dnsmasq-dns" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926490 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="dnsmasq-dns" Feb 02 10:52:39 crc kubenswrapper[4845]: E0202 10:52:39.926505 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ffbd9-33d6-4a0d-b812-1d092341d4f9" containerName="keystone-bootstrap" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926511 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ffbd9-33d6-4a0d-b812-1d092341d4f9" containerName="keystone-bootstrap" Feb 02 10:52:39 crc kubenswrapper[4845]: E0202 10:52:39.926530 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc8c937-0b9b-461a-be1b-02bdb587b70e" containerName="init" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926536 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc8c937-0b9b-461a-be1b-02bdb587b70e" containerName="init" Feb 02 10:52:39 crc kubenswrapper[4845]: E0202 10:52:39.926557 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-log" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926564 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-log" Feb 02 10:52:39 crc kubenswrapper[4845]: E0202 
10:52:39.926577 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="init" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926582 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="init" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.926997 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc8c937-0b9b-461a-be1b-02bdb587b70e" containerName="init" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.927022 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="250ffbd9-33d6-4a0d-b812-1d092341d4f9" containerName="keystone-bootstrap" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.927036 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-log" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.927056 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6155c0-72ff-4f97-9748-716e3df8ad88" containerName="dnsmasq-dns" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.927079 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" containerName="glance-httpd" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.929973 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.934653 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.938228 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 10:52:39 crc kubenswrapper[4845]: I0202 10:52:39.939219 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.098910 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.099258 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.099400 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.099506 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.099616 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl7xq\" (UniqueName: \"kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.100168 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.100722 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.101166 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203266 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203652 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203679 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203726 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203748 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl7xq\" (UniqueName: \"kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203800 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203825 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.203847 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.204591 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.205240 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.212647 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.212699 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a65ad887e305c92ac1c8235cc9c5fc327f1ea7ce91b9974356e11ee00bc2f81/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.216046 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.216137 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.217275 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.217086 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.229900 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl7xq\" (UniqueName: \"kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.278217 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.440118 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-glpvf"] Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.454600 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-glpvf"] Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.562613 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.563188 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f7js4"] Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.564627 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.570697 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.571017 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.571110 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.571270 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4r6h5" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.572250 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.584590 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7js4"] Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717135 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717227 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717302 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v4cxd\" (UniqueName: \"kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717346 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717415 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.717446 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.819645 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.819736 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.819909 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.819989 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.820028 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4cxd\" (UniqueName: \"kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.820090 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.824431 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts\") pod \"keystone-bootstrap-f7js4\" (UID: 
\"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.824719 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.824921 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.824961 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.828868 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 10:52:40.856410 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4cxd\" (UniqueName: \"kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd\") pod \"keystone-bootstrap-f7js4\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:40 crc kubenswrapper[4845]: I0202 
10:52:40.919685 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:52:41 crc kubenswrapper[4845]: I0202 10:52:41.725381 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250ffbd9-33d6-4a0d-b812-1d092341d4f9" path="/var/lib/kubelet/pods/250ffbd9-33d6-4a0d-b812-1d092341d4f9/volumes" Feb 02 10:52:41 crc kubenswrapper[4845]: I0202 10:52:41.726763 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7543ae2-53fd-42d7-971f-a09923f10187" path="/var/lib/kubelet/pods/f7543ae2-53fd-42d7-971f-a09923f10187/volumes" Feb 02 10:52:44 crc kubenswrapper[4845]: I0202 10:52:44.762792 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7lh98" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Feb 02 10:52:45 crc kubenswrapper[4845]: E0202 10:52:45.001007 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 10:52:45 crc kubenswrapper[4845]: E0202 10:52:45.001430 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjfj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hft5g_openstack(9868fb5b-b18e-42b0-8532-6e6a55da71d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:52:45 crc kubenswrapper[4845]: E0202 10:52:45.002594 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hft5g" 
podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" Feb 02 10:52:45 crc kubenswrapper[4845]: E0202 10:52:45.656365 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-hft5g" podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" Feb 02 10:52:47 crc kubenswrapper[4845]: E0202 10:52:47.088202 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 02 10:52:47 crc kubenswrapper[4845]: E0202 10:52:47.088514 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h67bh5b9h5fh688h565hb4h685h5d4h5cbh65fh644h54bh79hbdh59dh74h666h56bh7fh584h688hddh565h646h5fbh5h78h596hf7h656hcdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4g7v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ba91fb37-4550-4684-99bb-45dba169a879): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.219045 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.304526 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4hxp\" (UniqueName: \"kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305108 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305206 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305228 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305266 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305294 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305345 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.305458 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\" (UID: \"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a\") " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.306125 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.306141 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs" (OuterVolumeSpecName: "logs") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.307625 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.307675 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.315290 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts" (OuterVolumeSpecName: "scripts") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.323081 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp" (OuterVolumeSpecName: "kube-api-access-d4hxp") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "kube-api-access-d4hxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.343504 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7" (OuterVolumeSpecName: "glance") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "pvc-a16f116c-8f63-4ae9-a645-587add90fda7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.387088 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.393268 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.402095 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data" (OuterVolumeSpecName: "config-data") pod "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" (UID: "07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410278 4845 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") on node \"crc\" " Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410324 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4hxp\" (UniqueName: \"kubernetes.io/projected/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-kube-api-access-d4hxp\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410337 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410347 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410356 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.410364 4845 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.447623 4845 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.447810 4845 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a16f116c-8f63-4ae9-a645-587add90fda7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7") on node "crc" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.512683 4845 reconciler_common.go:293] "Volume detached for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.678084 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a","Type":"ContainerDied","Data":"3a2962ac3acf70d6e69f1fadfc2545a5d2cf6a481bf3dd186493c288796a95b6"} Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.678136 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.744401 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.744441 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.761336 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:47 crc kubenswrapper[4845]: E0202 10:52:47.761898 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-log" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.761911 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-log" Feb 02 10:52:47 crc kubenswrapper[4845]: E0202 10:52:47.761963 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-httpd" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.761972 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-httpd" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.762275 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-httpd" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.762306 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" containerName="glance-log" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.769707 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.773202 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.773235 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.782147 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.921356 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.921767 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.921850 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.922076 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.922168 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv2r\" (UniqueName: \"kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.922418 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.922455 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:47 crc kubenswrapper[4845]: I0202 10:52:47.922513 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024395 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024550 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024580 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024614 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024721 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024763 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nzv2r\" (UniqueName: \"kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024801 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.024831 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.025129 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.025187 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.027570 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.027606 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ed60d0e4f5ad2fb51e67cadb4519054184ad51c31b402d173121c9411d32387/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.029917 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.030256 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.031724 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.036823 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.044241 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzv2r\" (UniqueName: \"kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.079676 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " pod="openstack/glance-default-external-api-0" Feb 02 10:52:48 crc kubenswrapper[4845]: I0202 10:52:48.094987 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:52:49 crc kubenswrapper[4845]: I0202 10:52:49.728131 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a" path="/var/lib/kubelet/pods/07c692d7-14b8-42ef-8f5b-9c88fbfd5d6a/volumes" Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.627242 4845 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode0a38136-159d-482a-988e-07f3b77fdbb4"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode0a38136-159d-482a-988e-07f3b77fdbb4] : Timed out while waiting for systemd to remove kubepods-besteffort-pode0a38136_159d_482a_988e_07f3b77fdbb4.slice" Feb 02 10:52:51 crc kubenswrapper[4845]: E0202 10:52:51.627776 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pode0a38136-159d-482a-988e-07f3b77fdbb4] : unable to destroy cgroup paths for cgroup [kubepods besteffort pode0a38136-159d-482a-988e-07f3b77fdbb4] : Timed out while waiting for systemd to remove kubepods-besteffort-pode0a38136_159d_482a_988e_07f3b77fdbb4.slice" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" podUID="e0a38136-159d-482a-988e-07f3b77fdbb4" Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.723732 4845 generic.go:334] "Generic (PLEG): container finished" podID="856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" containerID="eaea8c79134a763eb67941fd983f9ce0e269d99bbc22b44421da606e30805a94" exitCode=0 Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.723859 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-g827z" Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.724647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cpdt4" event={"ID":"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2","Type":"ContainerDied","Data":"eaea8c79134a763eb67941fd983f9ce0e269d99bbc22b44421da606e30805a94"} Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.851078 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:51 crc kubenswrapper[4845]: I0202 10:52:51.873315 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-g827z"] Feb 02 10:52:53 crc kubenswrapper[4845]: I0202 10:52:53.727477 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a38136-159d-482a-988e-07f3b77fdbb4" path="/var/lib/kubelet/pods/e0a38136-159d-482a-988e-07f3b77fdbb4/volumes" Feb 02 10:52:54 crc kubenswrapper[4845]: I0202 10:52:54.763065 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7lh98" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Feb 02 10:52:54 crc kubenswrapper[4845]: I0202 10:52:54.763550 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.265784 4845 scope.go:117] "RemoveContainer" containerID="5afe1239bc585e0bef47e3d113767ef48eca1fb7a34ae2876787a3e542d61760" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.425636 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.434099 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542280 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle\") pod \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542353 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb\") pod \"125bfda8-e971-4249-8b07-0bbff61e4725\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542589 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config\") pod \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542633 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zllkd\" (UniqueName: \"kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd\") pod \"125bfda8-e971-4249-8b07-0bbff61e4725\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542729 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mpk6\" (UniqueName: \"kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6\") pod \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\" (UID: \"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542820 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb\") pod \"125bfda8-e971-4249-8b07-0bbff61e4725\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542872 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config\") pod \"125bfda8-e971-4249-8b07-0bbff61e4725\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.542917 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc\") pod \"125bfda8-e971-4249-8b07-0bbff61e4725\" (UID: \"125bfda8-e971-4249-8b07-0bbff61e4725\") " Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.548801 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6" (OuterVolumeSpecName: "kube-api-access-9mpk6") pod "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" (UID: "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2"). InnerVolumeSpecName "kube-api-access-9mpk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.549365 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd" (OuterVolumeSpecName: "kube-api-access-zllkd") pod "125bfda8-e971-4249-8b07-0bbff61e4725" (UID: "125bfda8-e971-4249-8b07-0bbff61e4725"). InnerVolumeSpecName "kube-api-access-zllkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.585914 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config" (OuterVolumeSpecName: "config") pod "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" (UID: "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.591335 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" (UID: "856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.608659 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config" (OuterVolumeSpecName: "config") pod "125bfda8-e971-4249-8b07-0bbff61e4725" (UID: "125bfda8-e971-4249-8b07-0bbff61e4725"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.611266 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "125bfda8-e971-4249-8b07-0bbff61e4725" (UID: "125bfda8-e971-4249-8b07-0bbff61e4725"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.618063 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "125bfda8-e971-4249-8b07-0bbff61e4725" (UID: "125bfda8-e971-4249-8b07-0bbff61e4725"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.628073 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "125bfda8-e971-4249-8b07-0bbff61e4725" (UID: "125bfda8-e971-4249-8b07-0bbff61e4725"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646568 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mpk6\" (UniqueName: \"kubernetes.io/projected/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-kube-api-access-9mpk6\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646602 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646614 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646623 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 
02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646634 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646643 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125bfda8-e971-4249-8b07-0bbff61e4725-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646652 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.646663 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zllkd\" (UniqueName: \"kubernetes.io/projected/125bfda8-e971-4249-8b07-0bbff61e4725-kube-api-access-zllkd\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.773684 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cpdt4" event={"ID":"856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2","Type":"ContainerDied","Data":"910eea5be92e0d78d64ac4574401cd26e5b94f8c77cd06cf2a9e9a5f781e5430"} Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.773725 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="910eea5be92e0d78d64ac4574401cd26e5b94f8c77cd06cf2a9e9a5f781e5430" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.773777 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cpdt4" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.781873 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7lh98" event={"ID":"125bfda8-e971-4249-8b07-0bbff61e4725","Type":"ContainerDied","Data":"d8710db5f1971bcb1ada6e2682b3528a8c529ad636b2e603fac42dddaaffa6b0"} Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.781946 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7lh98" Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.814611 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:52:55 crc kubenswrapper[4845]: I0202 10:52:55.833244 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7lh98"] Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.733742 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:52:56 crc kubenswrapper[4845]: E0202 10:52:56.734304 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.734328 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" Feb 02 10:52:56 crc kubenswrapper[4845]: E0202 10:52:56.734359 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="init" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.734369 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="init" Feb 02 10:52:56 crc kubenswrapper[4845]: E0202 10:52:56.734391 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" 
containerName="neutron-db-sync" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.734399 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" containerName="neutron-db-sync" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.734641 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.734688 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" containerName="neutron-db-sync" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.737122 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.762354 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.880866 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.880955 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.880977 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.881018 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tnwb\" (UniqueName: \"kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.881047 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.881166 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.938701 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.941985 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.944843 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.945270 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.945461 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tjnkn" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.945776 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.956576 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983546 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983614 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983634 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: 
\"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983679 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tnwb\" (UniqueName: \"kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983756 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.983860 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.984738 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.985239 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 
10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.985761 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.986311 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:56 crc kubenswrapper[4845]: I0202 10:52:56.986382 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.017346 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tnwb\" (UniqueName: \"kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb\") pod \"dnsmasq-dns-55f844cf75-mrvw8\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.078756 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.086123 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9m4\" (UniqueName: \"kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.086305 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.086355 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.086458 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.086496 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs\") pod \"neutron-7c658d9d4-mvn9b\" (UID: 
\"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.188534 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9m4\" (UniqueName: \"kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.189068 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.189094 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.189212 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.189287 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc 
kubenswrapper[4845]: I0202 10:52:57.193930 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.204655 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.206575 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9m4\" (UniqueName: \"kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.206618 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.221493 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config\") pod \"neutron-7c658d9d4-mvn9b\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.261945 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.735004 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" path="/var/lib/kubelet/pods/125bfda8-e971-4249-8b07-0bbff61e4725/volumes" Feb 02 10:52:57 crc kubenswrapper[4845]: E0202 10:52:57.765031 4845 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 10:52:57 crc kubenswrapper[4845]: E0202 10:52:57.765226 4845 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPa
th:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rflw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-g8b4r_openstack(183b0ef9-490f-43a1-a464-2bd64a820ebd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:52:57 crc kubenswrapper[4845]: E0202 10:52:57.767736 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-g8b4r" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" Feb 02 10:52:57 crc kubenswrapper[4845]: I0202 10:52:57.778012 4845 scope.go:117] "RemoveContainer" containerID="27b4758631a6299ed33bc9ad77808ae89e6b4adea1f9c02dc02478e0b78913fa" Feb 02 10:52:57 crc kubenswrapper[4845]: E0202 10:52:57.834095 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-g8b4r" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" Feb 02 10:52:58 crc kubenswrapper[4845]: I0202 10:52:58.314382 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:52:58 crc kubenswrapper[4845]: W0202 10:52:58.441817 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f69079e_af81_421c_870a_2a08c1b2420e.slice/crio-4d7f9cbfa39969e34e788860572765e355857e65be648d766beb851dfaf208a7 WatchSource:0}: Error finding container 4d7f9cbfa39969e34e788860572765e355857e65be648d766beb851dfaf208a7: Status 404 returned error can't find the container with id 4d7f9cbfa39969e34e788860572765e355857e65be648d766beb851dfaf208a7 Feb 02 10:52:58 crc kubenswrapper[4845]: I0202 10:52:58.482526 4845 scope.go:117] "RemoveContainer" containerID="754cb2f68170ee366688fac6af09583225928231cbd2adb4266116caffa237e6" Feb 02 10:52:58 crc kubenswrapper[4845]: I0202 10:52:58.711960 4845 scope.go:117] "RemoveContainer" containerID="7b934d9dc539ae78cc75a18c8a04ead9cef0ee49e2411933655a99884f524af5" Feb 02 10:52:58 crc kubenswrapper[4845]: I0202 10:52:58.870898 4845 scope.go:117] "RemoveContainer" containerID="ddd6909bbdf16b8af6fed8c71335e6bc1892bf152856011ceb433a2a497011e2" Feb 02 10:52:58 crc kubenswrapper[4845]: I0202 10:52:58.888502 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerStarted","Data":"4d7f9cbfa39969e34e788860572765e355857e65be648d766beb851dfaf208a7"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.004824 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.006774 4845 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.012739 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.013113 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.039170 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146052 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146446 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146491 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146542 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146587 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146629 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.146704 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9ml\" (UniqueName: \"kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.236380 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295389 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc 
kubenswrapper[4845]: I0202 10:52:59.295502 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295545 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295611 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295672 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295725 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.295833 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mk9ml\" (UniqueName: \"kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.314363 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.324449 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.328952 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.329032 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7js4"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.329344 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.338154 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9ml\" (UniqueName: \"kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.338967 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.343646 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs\") pod \"neutron-6898599c95-65qmn\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.353238 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.425394 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.569548 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.764040 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7lh98" podUID="125bfda8-e971-4249-8b07-0bbff61e4725" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.916745 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7js4" event={"ID":"ae3aa591-f1f0-4264-a970-d8172cc24781","Type":"ContainerStarted","Data":"055042cb2d15b6eebd454cdc9f356a4e981de86dda6b046eef4e17f7f79f827f"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.917111 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7js4" event={"ID":"ae3aa591-f1f0-4264-a970-d8172cc24781","Type":"ContainerStarted","Data":"a92252ab7199355c9918d53c338f871f76fb1b5cb5bdfbecf7537a0811f888ee"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.926505 4845 generic.go:334] "Generic (PLEG): container finished" podID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerID="720ce281ce34166ec16d662e6b2cfb3e73984fcbc45102640b4bba9183262c78" exitCode=0 Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.926594 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" event={"ID":"33b0a7fa-f66e-470e-95a3-a110ecec168b","Type":"ContainerDied","Data":"720ce281ce34166ec16d662e6b2cfb3e73984fcbc45102640b4bba9183262c78"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.926624 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" 
event={"ID":"33b0a7fa-f66e-470e-95a3-a110ecec168b","Type":"ContainerStarted","Data":"24bfd21204b757a912a5fde09fd51c09b7d4dece5bb10e7fce06a7581904b6df"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.942560 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f7js4" podStartSLOduration=19.942537954 podStartE2EDuration="19.942537954s" podCreationTimestamp="2026-02-02 10:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:52:59.940356632 +0000 UTC m=+1261.031758082" watchObservedRunningTime="2026-02-02 10:52:59.942537954 +0000 UTC m=+1261.033939404" Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.947353 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerStarted","Data":"4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.951911 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerStarted","Data":"4c795b185512cbaa08a41087a290ab376088c31c6277e1a8f0ee3f21dc22200f"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.959050 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kxrm5" event={"ID":"250e18d9-cb14-4309-8d0c-fb341511dba6","Type":"ContainerStarted","Data":"bf4552b15f381b58f4ac832ee06ab76b9eb11fbeaace10aae25c2f5b9bfbac69"} Feb 02 10:52:59 crc kubenswrapper[4845]: I0202 10:52:59.978093 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerStarted","Data":"046426bd44987200c7af4b3e2c8d25a4f244d0fd82b246a7123cbf79584728b3"} Feb 02 10:52:59 crc kubenswrapper[4845]: 
I0202 10:52:59.993578 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-kxrm5" podStartSLOduration=4.531212257 podStartE2EDuration="34.993557811s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="2026-02-02 10:52:27.323146892 +0000 UTC m=+1228.414548342" lastFinishedPulling="2026-02-02 10:52:57.785492446 +0000 UTC m=+1258.876893896" observedRunningTime="2026-02-02 10:52:59.981316171 +0000 UTC m=+1261.072717621" watchObservedRunningTime="2026-02-02 10:52:59.993557811 +0000 UTC m=+1261.084959261" Feb 02 10:53:00 crc kubenswrapper[4845]: I0202 10:53:00.029247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerStarted","Data":"e0403804c19c7d887fabdb04dd94aa27e4ca3026135843fda8f93cf69fb0c8cd"} Feb 02 10:53:00 crc kubenswrapper[4845]: I0202 10:53:00.070711 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxjbc" event={"ID":"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b","Type":"ContainerStarted","Data":"e50f62cc9707edb269ffe6698207592dfbc48d0ade6ff635de9614b2c3d62a34"} Feb 02 10:53:00 crc kubenswrapper[4845]: I0202 10:53:00.089614 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gxjbc" podStartSLOduration=5.458127102 podStartE2EDuration="35.089595833s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="2026-02-02 10:52:28.078615502 +0000 UTC m=+1229.170016952" lastFinishedPulling="2026-02-02 10:52:57.710084233 +0000 UTC m=+1258.801485683" observedRunningTime="2026-02-02 10:53:00.088463851 +0000 UTC m=+1261.179865311" watchObservedRunningTime="2026-02-02 10:53:00.089595833 +0000 UTC m=+1261.180997273" Feb 02 10:53:00 crc kubenswrapper[4845]: I0202 10:53:00.117620 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:53:00 crc 
kubenswrapper[4845]: W0202 10:53:00.149606 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d345372_d7c4_4094_b9cb_e2afbd2dbf54.slice/crio-00ea85117a3a7a613efb0ee0b197731f8faf53d379b635789e63fb18dc175257 WatchSource:0}: Error finding container 00ea85117a3a7a613efb0ee0b197731f8faf53d379b635789e63fb18dc175257: Status 404 returned error can't find the container with id 00ea85117a3a7a613efb0ee0b197731f8faf53d379b635789e63fb18dc175257 Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.086878 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerStarted","Data":"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.088869 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.089128 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerStarted","Data":"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.089151 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerStarted","Data":"00ea85117a3a7a613efb0ee0b197731f8faf53d379b635789e63fb18dc175257"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.092944 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerStarted","Data":"1e24bbe2d8cd0583fc986cf6fd412f527daea58b4a85706eb389322bf6ad3af7"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 
10:53:01.109423 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerStarted","Data":"0318987d82017a4372eee65da6e584e622e1a0531f87350923bd3ea8ad37c0e3"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.109480 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerStarted","Data":"2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.110728 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.118836 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6898599c95-65qmn" podStartSLOduration=3.11881324 podStartE2EDuration="3.11881324s" podCreationTimestamp="2026-02-02 10:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:01.105203381 +0000 UTC m=+1262.196604831" watchObservedRunningTime="2026-02-02 10:53:01.11881324 +0000 UTC m=+1262.210214690" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.128994 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" event={"ID":"33b0a7fa-f66e-470e-95a3-a110ecec168b","Type":"ContainerStarted","Data":"b19033b665f9424ff6ceb80860a3f821b9318c6ff803e1c8ca9ff4c4824c2a24"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.129059 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.148248 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerStarted","Data":"e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4"} Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.157147 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c658d9d4-mvn9b" podStartSLOduration=5.157123774 podStartE2EDuration="5.157123774s" podCreationTimestamp="2026-02-02 10:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:01.134456557 +0000 UTC m=+1262.225858007" watchObservedRunningTime="2026-02-02 10:53:01.157123774 +0000 UTC m=+1262.248525224" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.164767 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" podStartSLOduration=5.164743451 podStartE2EDuration="5.164743451s" podCreationTimestamp="2026-02-02 10:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:01.159424639 +0000 UTC m=+1262.250826089" watchObservedRunningTime="2026-02-02 10:53:01.164743451 +0000 UTC m=+1262.256144901" Feb 02 10:53:01 crc kubenswrapper[4845]: I0202 10:53:01.188348 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.188326525 podStartE2EDuration="22.188326525s" podCreationTimestamp="2026-02-02 10:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:01.185094972 +0000 UTC m=+1262.276496422" watchObservedRunningTime="2026-02-02 10:53:01.188326525 +0000 UTC m=+1262.279727975" Feb 02 10:53:02 crc kubenswrapper[4845]: I0202 10:53:02.164483 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerStarted","Data":"5c7e1e0f5ba6836be4b2cc0a23514474d1930ec71f3bf7b3e6b27bbccac7ee40"} Feb 02 10:53:02 crc kubenswrapper[4845]: I0202 10:53:02.199682 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.199660561 podStartE2EDuration="15.199660561s" podCreationTimestamp="2026-02-02 10:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:02.187098232 +0000 UTC m=+1263.278499692" watchObservedRunningTime="2026-02-02 10:53:02.199660561 +0000 UTC m=+1263.291062011" Feb 02 10:53:04 crc kubenswrapper[4845]: I0202 10:53:04.191622 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hft5g" event={"ID":"9868fb5b-b18e-42b0-8532-6e6a55da71d2","Type":"ContainerStarted","Data":"1ae234554f1bf061b9f12986d072427a2f50a26aa3b60b9c9cf12a5ccc0e8cce"} Feb 02 10:53:04 crc kubenswrapper[4845]: I0202 10:53:04.228082 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hft5g" podStartSLOduration=6.099840694 podStartE2EDuration="39.228061575s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="2026-02-02 10:52:28.502477844 +0000 UTC m=+1229.593879294" lastFinishedPulling="2026-02-02 10:53:01.630698725 +0000 UTC m=+1262.722100175" observedRunningTime="2026-02-02 10:53:04.21560483 +0000 UTC m=+1265.307006280" watchObservedRunningTime="2026-02-02 10:53:04.228061575 +0000 UTC m=+1265.319463025" Feb 02 10:53:05 crc kubenswrapper[4845]: I0202 10:53:05.207722 4845 generic.go:334] "Generic (PLEG): container finished" podID="ae3aa591-f1f0-4264-a970-d8172cc24781" containerID="055042cb2d15b6eebd454cdc9f356a4e981de86dda6b046eef4e17f7f79f827f" exitCode=0 Feb 02 10:53:05 crc kubenswrapper[4845]: 
I0202 10:53:05.207785 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7js4" event={"ID":"ae3aa591-f1f0-4264-a970-d8172cc24781","Type":"ContainerDied","Data":"055042cb2d15b6eebd454cdc9f356a4e981de86dda6b046eef4e17f7f79f827f"} Feb 02 10:53:05 crc kubenswrapper[4845]: I0202 10:53:05.211548 4845 generic.go:334] "Generic (PLEG): container finished" podID="1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" containerID="e50f62cc9707edb269ffe6698207592dfbc48d0ade6ff635de9614b2c3d62a34" exitCode=0 Feb 02 10:53:05 crc kubenswrapper[4845]: I0202 10:53:05.211594 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxjbc" event={"ID":"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b","Type":"ContainerDied","Data":"e50f62cc9707edb269ffe6698207592dfbc48d0ade6ff635de9614b2c3d62a34"} Feb 02 10:53:06 crc kubenswrapper[4845]: I0202 10:53:06.240656 4845 generic.go:334] "Generic (PLEG): container finished" podID="250e18d9-cb14-4309-8d0c-fb341511dba6" containerID="bf4552b15f381b58f4ac832ee06ab76b9eb11fbeaace10aae25c2f5b9bfbac69" exitCode=0 Feb 02 10:53:06 crc kubenswrapper[4845]: I0202 10:53:06.240751 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kxrm5" event={"ID":"250e18d9-cb14-4309-8d0c-fb341511dba6","Type":"ContainerDied","Data":"bf4552b15f381b58f4ac832ee06ab76b9eb11fbeaace10aae25c2f5b9bfbac69"} Feb 02 10:53:06 crc kubenswrapper[4845]: I0202 10:53:06.966374 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:53:06 crc kubenswrapper[4845]: I0202 10:53:06.995994 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gxjbc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.037230 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn4fj\" (UniqueName: \"kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj\") pod \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.037599 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.037773 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs\") pod \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.038120 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.038357 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4cxd\" (UniqueName: \"kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.038516 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.038704 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle\") pod \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.038872 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.039281 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys\") pod \"ae3aa591-f1f0-4264-a970-d8172cc24781\" (UID: \"ae3aa591-f1f0-4264-a970-d8172cc24781\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.039616 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts\") pod \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.041520 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data\") pod \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\" (UID: \"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b\") " Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.041037 4845 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs" (OuterVolumeSpecName: "logs") pod "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" (UID: "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.049642 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.055463 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts" (OuterVolumeSpecName: "scripts") pod "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" (UID: "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.055854 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts" (OuterVolumeSpecName: "scripts") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.055938 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.067437 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd" (OuterVolumeSpecName: "kube-api-access-v4cxd") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "kube-api-access-v4cxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.068120 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj" (OuterVolumeSpecName: "kube-api-access-vn4fj") pod "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" (UID: "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b"). InnerVolumeSpecName "kube-api-access-vn4fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.083442 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.107741 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.143767 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" (UID: "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.151932 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn4fj\" (UniqueName: \"kubernetes.io/projected/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-kube-api-access-vn4fj\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152282 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152588 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152601 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152612 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4cxd\" (UniqueName: \"kubernetes.io/projected/ae3aa591-f1f0-4264-a970-d8172cc24781-kube-api-access-v4cxd\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152620 4845 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152629 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc 
kubenswrapper[4845]: I0202 10:53:07.152637 4845 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.152646 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.186518 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.186834 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="dnsmasq-dns" containerID="cri-o://3ed1c7658c68f0c8c4a3c24a82346d038eae78839b07b31f8eefe48619b71f5d" gracePeriod=10 Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.205488 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data" (OuterVolumeSpecName: "config-data") pod "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" (UID: "1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.208543 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data" (OuterVolumeSpecName: "config-data") pod "ae3aa591-f1f0-4264-a970-d8172cc24781" (UID: "ae3aa591-f1f0-4264-a970-d8172cc24781"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.254438 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.254470 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3aa591-f1f0-4264-a970-d8172cc24781-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.256866 4845 generic.go:334] "Generic (PLEG): container finished" podID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" containerID="1ae234554f1bf061b9f12986d072427a2f50a26aa3b60b9c9cf12a5ccc0e8cce" exitCode=0 Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.257102 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hft5g" event={"ID":"9868fb5b-b18e-42b0-8532-6e6a55da71d2","Type":"ContainerDied","Data":"1ae234554f1bf061b9f12986d072427a2f50a26aa3b60b9c9cf12a5ccc0e8cce"} Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.262391 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7js4" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.262400 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7js4" event={"ID":"ae3aa591-f1f0-4264-a970-d8172cc24781","Type":"ContainerDied","Data":"a92252ab7199355c9918d53c338f871f76fb1b5cb5bdfbecf7537a0811f888ee"} Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.262735 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92252ab7199355c9918d53c338f871f76fb1b5cb5bdfbecf7537a0811f888ee" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.280636 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gxjbc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.282002 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxjbc" event={"ID":"1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b","Type":"ContainerDied","Data":"f5152a46482c33a388878313470bd17796ec05812413482c82f7e14ccb92881e"} Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.282038 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5152a46482c33a388878313470bd17796ec05812413482c82f7e14ccb92881e" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.409255 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c4f9db54b-5v9r8"] Feb 02 10:53:07 crc kubenswrapper[4845]: E0202 10:53:07.410095 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3aa591-f1f0-4264-a970-d8172cc24781" containerName="keystone-bootstrap" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.410117 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3aa591-f1f0-4264-a970-d8172cc24781" containerName="keystone-bootstrap" Feb 02 10:53:07 crc kubenswrapper[4845]: E0202 10:53:07.410147 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" containerName="placement-db-sync" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.410154 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" containerName="placement-db-sync" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.410344 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" containerName="placement-db-sync" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.410366 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3aa591-f1f0-4264-a970-d8172cc24781" containerName="keystone-bootstrap" Feb 02 10:53:07 crc 
kubenswrapper[4845]: I0202 10:53:07.411211 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.419947 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.420013 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.420367 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4r6h5" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.420506 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.420576 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.420704 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.427566 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c4f9db54b-5v9r8"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460070 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-internal-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460124 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kb9b\" (UniqueName: 
\"kubernetes.io/projected/61e42051-311d-4b4b-af17-e301351d9267-kube-api-access-2kb9b\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460190 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-scripts\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460191 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460359 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-credential-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460393 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-fernet-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460471 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-config-data\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460745 
4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-public-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.460943 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-combined-ca-bundle\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.462660 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.474270 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.474482 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.474596 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.474708 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.475436 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kzmx2" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.550678 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.581965 
4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-config-data\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582067 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582097 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582219 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-public-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582292 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5b8\" (UniqueName: \"kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582470 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-combined-ca-bundle\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582511 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582647 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582847 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-internal-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582899 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kb9b\" (UniqueName: \"kubernetes.io/projected/61e42051-311d-4b4b-af17-e301351d9267-kube-api-access-2kb9b\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.582934 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.583014 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-scripts\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.583052 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.583103 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-credential-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.583134 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-fernet-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.594157 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-public-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.596590 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-combined-ca-bundle\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.597151 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-internal-tls-certs\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.599077 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-scripts\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.599600 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-fernet-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.600226 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-config-data\") pod \"keystone-c4f9db54b-5v9r8\" (UID: 
\"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.603164 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61e42051-311d-4b4b-af17-e301351d9267-credential-keys\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.612424 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kb9b\" (UniqueName: \"kubernetes.io/projected/61e42051-311d-4b4b-af17-e301351d9267-kube-api-access-2kb9b\") pod \"keystone-c4f9db54b-5v9r8\" (UID: \"61e42051-311d-4b4b-af17-e301351d9267\") " pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.687735 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.687843 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.688450 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 
10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.688481 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.688598 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll5b8\" (UniqueName: \"kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.689037 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.689848 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.695336 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.700400 4845 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.704401 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.706894 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.714542 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.717435 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.730058 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll5b8\" (UniqueName: 
\"kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8\") pod \"placement-7fd6897c68-cspbg\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.781505 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.808952 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68f64c64d8-r7nkx"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.811569 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.811728 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.823250 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f64c64d8-r7nkx"] Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903379 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-scripts\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903434 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/5978920a-e63d-4cb3-accd-4353fb398d50-kube-api-access-zhrlx\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903485 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-internal-tls-certs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903519 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-public-tls-certs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903578 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-config-data\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903632 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5978920a-e63d-4cb3-accd-4353fb398d50-logs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:07 crc kubenswrapper[4845]: I0202 10:53:07.903671 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-combined-ca-bundle\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005281 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5978920a-e63d-4cb3-accd-4353fb398d50-logs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005343 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-combined-ca-bundle\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005388 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-scripts\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005413 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/5978920a-e63d-4cb3-accd-4353fb398d50-kube-api-access-zhrlx\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005455 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-internal-tls-certs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005489 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-public-tls-certs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.005550 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-config-data\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.006229 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5978920a-e63d-4cb3-accd-4353fb398d50-logs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.010079 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-combined-ca-bundle\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.011225 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-public-tls-certs\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.014576 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-internal-tls-certs\") pod 
\"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.017701 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-scripts\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.020923 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5978920a-e63d-4cb3-accd-4353fb398d50-config-data\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.022192 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/5978920a-e63d-4cb3-accd-4353fb398d50-kube-api-access-zhrlx\") pod \"placement-68f64c64d8-r7nkx\" (UID: \"5978920a-e63d-4cb3-accd-4353fb398d50\") " pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.031741 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.035323 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-kxrm5" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.095636 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.095914 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.107337 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data\") pod \"250e18d9-cb14-4309-8d0c-fb341511dba6\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.107557 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle\") pod \"250e18d9-cb14-4309-8d0c-fb341511dba6\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.108043 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjs7z\" (UniqueName: \"kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z\") pod \"250e18d9-cb14-4309-8d0c-fb341511dba6\" (UID: \"250e18d9-cb14-4309-8d0c-fb341511dba6\") " Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.175682 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z" (OuterVolumeSpecName: "kube-api-access-qjs7z") pod "250e18d9-cb14-4309-8d0c-fb341511dba6" (UID: "250e18d9-cb14-4309-8d0c-fb341511dba6"). InnerVolumeSpecName "kube-api-access-qjs7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.191037 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "250e18d9-cb14-4309-8d0c-fb341511dba6" (UID: "250e18d9-cb14-4309-8d0c-fb341511dba6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.211754 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.211790 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjs7z\" (UniqueName: \"kubernetes.io/projected/250e18d9-cb14-4309-8d0c-fb341511dba6-kube-api-access-qjs7z\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.228948 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.242540 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.283072 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data" (OuterVolumeSpecName: "config-data") pod "250e18d9-cb14-4309-8d0c-fb341511dba6" (UID: "250e18d9-cb14-4309-8d0c-fb341511dba6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.316465 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250e18d9-cb14-4309-8d0c-fb341511dba6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.323747 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerStarted","Data":"423aeacd820ec5c2d675794591067804e90d1f2a6923ef3a9f13012b659813bc"} Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.332078 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kxrm5" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.332138 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kxrm5" event={"ID":"250e18d9-cb14-4309-8d0c-fb341511dba6","Type":"ContainerDied","Data":"17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc"} Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.333387 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17906d788647666b2cd6e069e9338a17bf04443c927ea4ca63e22baf04dbc8dc" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.342184 4845 generic.go:334] "Generic (PLEG): container finished" podID="08802fb3-9897-4819-a38b-fe13e8892b47" containerID="3ed1c7658c68f0c8c4a3c24a82346d038eae78839b07b31f8eefe48619b71f5d" exitCode=0 Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.342412 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" event={"ID":"08802fb3-9897-4819-a38b-fe13e8892b47","Type":"ContainerDied","Data":"3ed1c7658c68f0c8c4a3c24a82346d038eae78839b07b31f8eefe48619b71f5d"} Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.343812 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.343830 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.453816 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c4f9db54b-5v9r8"] Feb 02 10:53:08 crc kubenswrapper[4845]: W0202 10:53:08.455114 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61e42051_311d_4b4b_af17_e301351d9267.slice/crio-6455add72597d00c7649f2b0ffe111a027f137f15effd389324caff9578aa249 WatchSource:0}: Error finding container 6455add72597d00c7649f2b0ffe111a027f137f15effd389324caff9578aa249: Status 404 returned error can't find the container with id 6455add72597d00c7649f2b0ffe111a027f137f15effd389324caff9578aa249 Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.842632 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f64c64d8-r7nkx"] Feb 02 10:53:08 crc kubenswrapper[4845]: W0202 10:53:08.890275 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5978920a_e63d_4cb3_accd_4353fb398d50.slice/crio-e2d0394a7b810871bbdaa27e77b8a0f2bd4bd884a78bdd6ac816ce5bfbfab309 WatchSource:0}: Error finding container e2d0394a7b810871bbdaa27e77b8a0f2bd4bd884a78bdd6ac816ce5bfbfab309: Status 404 returned error can't find the container with id e2d0394a7b810871bbdaa27e77b8a0f2bd4bd884a78bdd6ac816ce5bfbfab309 Feb 02 10:53:08 crc kubenswrapper[4845]: I0202 10:53:08.990289 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.158730 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.162456 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hft5g" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.288215 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfj8\" (UniqueName: \"kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8\") pod \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.288537 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.288602 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.288906 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle\") pod \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\" (UID: \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.288968 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data\") pod \"9868fb5b-b18e-42b0-8532-6e6a55da71d2\" (UID: 
\"9868fb5b-b18e-42b0-8532-6e6a55da71d2\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.289029 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.289056 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.289171 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.289210 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcx2l\" (UniqueName: \"kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l\") pod \"08802fb3-9897-4819-a38b-fe13e8892b47\" (UID: \"08802fb3-9897-4819-a38b-fe13e8892b47\") " Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.312041 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8" (OuterVolumeSpecName: "kube-api-access-cjfj8") pod "9868fb5b-b18e-42b0-8532-6e6a55da71d2" (UID: "9868fb5b-b18e-42b0-8532-6e6a55da71d2"). InnerVolumeSpecName "kube-api-access-cjfj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.332044 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9868fb5b-b18e-42b0-8532-6e6a55da71d2" (UID: "9868fb5b-b18e-42b0-8532-6e6a55da71d2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.332345 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l" (OuterVolumeSpecName: "kube-api-access-gcx2l") pod "08802fb3-9897-4819-a38b-fe13e8892b47" (UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "kube-api-access-gcx2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.387858 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f64c64d8-r7nkx" event={"ID":"5978920a-e63d-4cb3-accd-4353fb398d50","Type":"ContainerStarted","Data":"a47a0f95f0ce06450c22e2ed7acfa09c8176f7a345b685a71ab421ddd8c6e97c"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.387939 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f64c64d8-r7nkx" event={"ID":"5978920a-e63d-4cb3-accd-4353fb398d50","Type":"ContainerStarted","Data":"e2d0394a7b810871bbdaa27e77b8a0f2bd4bd884a78bdd6ac816ce5bfbfab309"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.390796 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c4f9db54b-5v9r8" event={"ID":"61e42051-311d-4b4b-af17-e301351d9267","Type":"ContainerStarted","Data":"a171327ad4eaa29de568604671f066c8d6fe4c6e25dc2476d739955a7153396d"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.390840 4845 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-c4f9db54b-5v9r8" event={"ID":"61e42051-311d-4b4b-af17-e301351d9267","Type":"ContainerStarted","Data":"6455add72597d00c7649f2b0ffe111a027f137f15effd389324caff9578aa249"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.391870 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.392975 4845 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.393001 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcx2l\" (UniqueName: \"kubernetes.io/projected/08802fb3-9897-4819-a38b-fe13e8892b47-kube-api-access-gcx2l\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.393012 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfj8\" (UniqueName: \"kubernetes.io/projected/9868fb5b-b18e-42b0-8532-6e6a55da71d2-kube-api-access-cjfj8\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.393124 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9868fb5b-b18e-42b0-8532-6e6a55da71d2" (UID: "9868fb5b-b18e-42b0-8532-6e6a55da71d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.396548 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" event={"ID":"08802fb3-9897-4819-a38b-fe13e8892b47","Type":"ContainerDied","Data":"990f2a82fd916ae58e607d4d041c7e0245ec85e42bc4953b989f05218c6f9e19"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.397438 4845 scope.go:117] "RemoveContainer" containerID="3ed1c7658c68f0c8c4a3c24a82346d038eae78839b07b31f8eefe48619b71f5d" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.397173 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sbm2k" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.402973 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hft5g" event={"ID":"9868fb5b-b18e-42b0-8532-6e6a55da71d2","Type":"ContainerDied","Data":"303b261fa422f333b504b71646d5d15f21982e8b9a4a8cba35c7d99137363acc"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.403140 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="303b261fa422f333b504b71646d5d15f21982e8b9a4a8cba35c7d99137363acc" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.403332 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hft5g" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.411171 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerStarted","Data":"72291ce4c24275105ea624fdae6cb6154c6bb75e37f637d2f201663a1789f5ec"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.411245 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerStarted","Data":"228de4e8bc7c765fb5d366d131a0a0268b9b4fac526b28962bf893b2beef69a2"} Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.420229 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c4f9db54b-5v9r8" podStartSLOduration=2.420205893 podStartE2EDuration="2.420205893s" podCreationTimestamp="2026-02-02 10:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:09.410694101 +0000 UTC m=+1270.502095561" watchObservedRunningTime="2026-02-02 10:53:09.420205893 +0000 UTC m=+1270.511607343" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.475736 4845 scope.go:117] "RemoveContainer" containerID="ad0d9ec1ecba0e033c137223df62f5b330d1ec35e21a3f0e070e5061487e39e2" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.500695 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9868fb5b-b18e-42b0-8532-6e6a55da71d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.557009 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08802fb3-9897-4819-a38b-fe13e8892b47" 
(UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.592853 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08802fb3-9897-4819-a38b-fe13e8892b47" (UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.608458 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.608507 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.613140 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08802fb3-9897-4819-a38b-fe13e8892b47" (UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.619290 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config" (OuterVolumeSpecName: "config") pod "08802fb3-9897-4819-a38b-fe13e8892b47" (UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.662548 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08802fb3-9897-4819-a38b-fe13e8892b47" (UID: "08802fb3-9897-4819-a38b-fe13e8892b47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.746854 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.746903 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.746920 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08802fb3-9897-4819-a38b-fe13e8892b47-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.898284 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-555888887b-mbz72"] Feb 02 10:53:09 crc kubenswrapper[4845]: E0202 10:53:09.898855 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="init" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.898872 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="init" Feb 02 10:53:09 crc kubenswrapper[4845]: E0202 10:53:09.898920 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="250e18d9-cb14-4309-8d0c-fb341511dba6" containerName="heat-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.898931 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="250e18d9-cb14-4309-8d0c-fb341511dba6" containerName="heat-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: E0202 10:53:09.898950 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="dnsmasq-dns" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.898958 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="dnsmasq-dns" Feb 02 10:53:09 crc kubenswrapper[4845]: E0202 10:53:09.898990 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" containerName="barbican-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.898998 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" containerName="barbican-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.899297 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" containerName="dnsmasq-dns" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.899317 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" containerName="barbican-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.899334 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="250e18d9-cb14-4309-8d0c-fb341511dba6" containerName="heat-db-sync" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.900823 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.906022 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-954bfc4f9-dfghw"] Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.915690 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.943955 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-555888887b-mbz72"] Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.954495 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-954bfc4f9-dfghw"] Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.957552 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.957797 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.957973 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rddwg" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.958128 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.985003 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:09 crc kubenswrapper[4845]: I0202 10:53:09.987027 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.038035 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075292 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-combined-ca-bundle\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075346 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075373 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29622\" (UniqueName: \"kubernetes.io/projected/a9ec709e-f840-4ba0-b631-77038f9c5551-kube-api-access-29622\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075401 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 
10:53:10.075532 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea749b-261a-4af3-979a-127dca4af07c-logs\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075559 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data-custom\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075578 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-combined-ca-bundle\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075619 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data-custom\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075649 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf56m\" (UniqueName: \"kubernetes.io/projected/6dea749b-261a-4af3-979a-127dca4af07c-kube-api-access-rf56m\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: 
\"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.075693 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ec709e-f840-4ba0-b631-77038f9c5551-logs\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178492 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178564 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea749b-261a-4af3-979a-127dca4af07c-logs\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178651 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data-custom\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178713 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85th\" (UniqueName: 
\"kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178747 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-combined-ca-bundle\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178797 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data-custom\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178828 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf56m\" (UniqueName: \"kubernetes.io/projected/6dea749b-261a-4af3-979a-127dca4af07c-kube-api-access-rf56m\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178859 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ec709e-f840-4ba0-b631-77038f9c5551-logs\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178883 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178944 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-combined-ca-bundle\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178973 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.178999 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.179029 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29622\" (UniqueName: \"kubernetes.io/projected/a9ec709e-f840-4ba0-b631-77038f9c5551-kube-api-access-29622\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.179069 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.179101 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.179143 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.181193 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea749b-261a-4af3-979a-127dca4af07c-logs\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.190848 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data-custom\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.197842 4845 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ec709e-f840-4ba0-b631-77038f9c5551-logs\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.200420 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.203057 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-combined-ca-bundle\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.204667 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-config-data-custom\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.210346 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea749b-261a-4af3-979a-127dca4af07c-config-data\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.216309 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a9ec709e-f840-4ba0-b631-77038f9c5551-combined-ca-bundle\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.219140 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29622\" (UniqueName: \"kubernetes.io/projected/a9ec709e-f840-4ba0-b631-77038f9c5551-kube-api-access-29622\") pod \"barbican-worker-954bfc4f9-dfghw\" (UID: \"a9ec709e-f840-4ba0-b631-77038f9c5551\") " pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.219252 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf56m\" (UniqueName: \"kubernetes.io/projected/6dea749b-261a-4af3-979a-127dca4af07c-kube-api-access-rf56m\") pod \"barbican-keystone-listener-555888887b-mbz72\" (UID: \"6dea749b-261a-4af3-979a-127dca4af07c\") " pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.229035 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.231028 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.233481 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.247060 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.269091 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sbm2k"] Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.282474 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.282656 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.282741 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.282928 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0\") pod 
\"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.283080 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85th\" (UniqueName: \"kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.283269 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.284749 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.286560 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.288748 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " 
pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.289954 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.290320 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.291509 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.326470 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85th\" (UniqueName: \"kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th\") pod \"dnsmasq-dns-85ff748b95-4vk87\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.385433 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.385490 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.385539 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn24r\" (UniqueName: \"kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.385603 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.386110 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.406293 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-555888887b-mbz72" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.448416 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f64c64d8-r7nkx" event={"ID":"5978920a-e63d-4cb3-accd-4353fb398d50","Type":"ContainerStarted","Data":"17b7917cedcfcfc5e227ee4c25eb904c009d07260f12e62f3d70f72ffa70598c"} Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.449842 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.449943 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.456987 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.457036 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.458570 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerStarted","Data":"a21d26fcb519a4c746b991dcfeca12c3245ddc62427af655c6f5de2c40b04948"} Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.459203 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.459311 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.472857 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-954bfc4f9-dfghw" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.485688 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68f64c64d8-r7nkx" podStartSLOduration=3.485663823 podStartE2EDuration="3.485663823s" podCreationTimestamp="2026-02-02 10:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:10.470114599 +0000 UTC m=+1271.561516049" watchObservedRunningTime="2026-02-02 10:53:10.485663823 +0000 UTC m=+1271.577065273" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.489392 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.489444 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.489468 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn24r\" (UniqueName: \"kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.489488 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.490391 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.491781 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.494762 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.496847 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.512488 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle\") pod 
\"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.517735 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fd6897c68-cspbg" podStartSLOduration=3.517710368 podStartE2EDuration="3.517710368s" podCreationTimestamp="2026-02-02 10:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:10.512994184 +0000 UTC m=+1271.604395634" watchObservedRunningTime="2026-02-02 10:53:10.517710368 +0000 UTC m=+1271.609111818" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.522959 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn24r\" (UniqueName: \"kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r\") pod \"barbican-api-c848b759d-9s78l\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.566254 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.566612 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.566647 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.566659 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.627795 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.647009 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.693437 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:10 crc kubenswrapper[4845]: I0202 10:53:10.698062 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.221861 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-555888887b-mbz72"] Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.307105 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-954bfc4f9-dfghw"] Feb 02 10:53:11 crc kubenswrapper[4845]: W0202 10:53:11.321448 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ec709e_f840_4ba0_b631_77038f9c5551.slice/crio-3016059426dd7a405f81151db970d25366ab2fc6bef18710c9c470df08f3648b WatchSource:0}: Error finding container 3016059426dd7a405f81151db970d25366ab2fc6bef18710c9c470df08f3648b: Status 404 returned error can't find the container with id 3016059426dd7a405f81151db970d25366ab2fc6bef18710c9c470df08f3648b Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.404036 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:11 crc kubenswrapper[4845]: W0202 10:53:11.411171 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e98b98e_a993_4000_90f3_3372541369fb.slice/crio-46aea51eef4e5bca23196251795de36a78dad7c530d09f830de8c2b61b899f53 WatchSource:0}: 
Error finding container 46aea51eef4e5bca23196251795de36a78dad7c530d09f830de8c2b61b899f53: Status 404 returned error can't find the container with id 46aea51eef4e5bca23196251795de36a78dad7c530d09f830de8c2b61b899f53 Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.414463 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:11 crc kubenswrapper[4845]: W0202 10:53:11.427649 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ddd7b77_b40e_4cbd_bce4_eecb7b7eae98.slice/crio-9e1e2aebf9151dc6f3a68108d64e5db12d097d08be7641fdb5a06d1241111d90 WatchSource:0}: Error finding container 9e1e2aebf9151dc6f3a68108d64e5db12d097d08be7641fdb5a06d1241111d90: Status 404 returned error can't find the container with id 9e1e2aebf9151dc6f3a68108d64e5db12d097d08be7641fdb5a06d1241111d90 Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.483840 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-954bfc4f9-dfghw" event={"ID":"a9ec709e-f840-4ba0-b631-77038f9c5551","Type":"ContainerStarted","Data":"3016059426dd7a405f81151db970d25366ab2fc6bef18710c9c470df08f3648b"} Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.493928 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555888887b-mbz72" event={"ID":"6dea749b-261a-4af3-979a-127dca4af07c","Type":"ContainerStarted","Data":"676383dd39e311295ba943d078e853f42f0ecda45d380800c574ef6a2d9dae3e"} Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.498730 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" event={"ID":"1e98b98e-a993-4000-90f3-3372541369fb","Type":"ContainerStarted","Data":"46aea51eef4e5bca23196251795de36a78dad7c530d09f830de8c2b61b899f53"} Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.503269 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerStarted","Data":"9e1e2aebf9151dc6f3a68108d64e5db12d097d08be7641fdb5a06d1241111d90"} Feb 02 10:53:11 crc kubenswrapper[4845]: I0202 10:53:11.743394 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08802fb3-9897-4819-a38b-fe13e8892b47" path="/var/lib/kubelet/pods/08802fb3-9897-4819-a38b-fe13e8892b47/volumes" Feb 02 10:53:12 crc kubenswrapper[4845]: I0202 10:53:12.529461 4845 generic.go:334] "Generic (PLEG): container finished" podID="1e98b98e-a993-4000-90f3-3372541369fb" containerID="a506c6cd7f566d8573c6029c7963e6c4b4d345abfac828b450e7916b80814864" exitCode=0 Feb 02 10:53:12 crc kubenswrapper[4845]: I0202 10:53:12.530082 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" event={"ID":"1e98b98e-a993-4000-90f3-3372541369fb","Type":"ContainerDied","Data":"a506c6cd7f566d8573c6029c7963e6c4b4d345abfac828b450e7916b80814864"} Feb 02 10:53:12 crc kubenswrapper[4845]: I0202 10:53:12.566821 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerStarted","Data":"176473ee3218eaa7c0eebfe1c20972c9902b004b5527eec40d3ad45e506fbd6b"} Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.361400 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8448c87f86-gdg49"] Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.389035 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.393016 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.393588 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.430298 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8448c87f86-gdg49"] Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.456249 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smqk\" (UniqueName: \"kubernetes.io/projected/4d926fea-dae3-4818-a608-4d9fa52abef5-kube-api-access-5smqk\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.456372 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d926fea-dae3-4818-a608-4d9fa52abef5-logs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.456421 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data-custom\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.456465 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.456526 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-public-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.457098 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-combined-ca-bundle\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.457187 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-internal-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560591 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-combined-ca-bundle\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560653 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-internal-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560700 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smqk\" (UniqueName: \"kubernetes.io/projected/4d926fea-dae3-4818-a608-4d9fa52abef5-kube-api-access-5smqk\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560748 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d926fea-dae3-4818-a608-4d9fa52abef5-logs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560779 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data-custom\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560812 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.560848 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-public-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.576624 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d926fea-dae3-4818-a608-4d9fa52abef5-logs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.581043 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" event={"ID":"1e98b98e-a993-4000-90f3-3372541369fb","Type":"ContainerStarted","Data":"d65ee280aa67a3b9d4ada8a3d8e251139871dc36b778ac679ad93aaaa994de0c"} Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.581204 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.584810 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerStarted","Data":"b43d9c1e2c65f2ee90a2f57f9bbc1d12c9cf5e6cdd4115a797d97d1794b7caa6"} Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.585099 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.585127 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.585556 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-combined-ca-bundle\") pod 
\"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.587089 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data-custom\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.596346 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-public-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.597115 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smqk\" (UniqueName: \"kubernetes.io/projected/4d926fea-dae3-4818-a608-4d9fa52abef5-kube-api-access-5smqk\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.597211 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-config-data\") pod \"barbican-api-8448c87f86-gdg49\" (UID: \"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.597336 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d926fea-dae3-4818-a608-4d9fa52abef5-internal-tls-certs\") pod \"barbican-api-8448c87f86-gdg49\" (UID: 
\"4d926fea-dae3-4818-a608-4d9fa52abef5\") " pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.607363 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" podStartSLOduration=4.607337913 podStartE2EDuration="4.607337913s" podCreationTimestamp="2026-02-02 10:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:13.603843884 +0000 UTC m=+1274.695245334" watchObservedRunningTime="2026-02-02 10:53:13.607337913 +0000 UTC m=+1274.698739363" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.635329 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c848b759d-9s78l" podStartSLOduration=3.635271041 podStartE2EDuration="3.635271041s" podCreationTimestamp="2026-02-02 10:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:13.628141887 +0000 UTC m=+1274.719543337" watchObservedRunningTime="2026-02-02 10:53:13.635271041 +0000 UTC m=+1274.726672501" Feb 02 10:53:13 crc kubenswrapper[4845]: I0202 10:53:13.746216 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:14 crc kubenswrapper[4845]: I0202 10:53:14.320333 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8448c87f86-gdg49"] Feb 02 10:53:15 crc kubenswrapper[4845]: I0202 10:53:15.620471 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448c87f86-gdg49" event={"ID":"4d926fea-dae3-4818-a608-4d9fa52abef5","Type":"ContainerStarted","Data":"4219ffb364f9dc6fa9c34761a8a90010efdcbb6c1a8369e6dcfa7bce1ab1673d"} Feb 02 10:53:16 crc kubenswrapper[4845]: I0202 10:53:16.238098 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:53:16 crc kubenswrapper[4845]: I0202 10:53:16.238608 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:53:17 crc kubenswrapper[4845]: I0202 10:53:17.197320 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 10:53:17 crc kubenswrapper[4845]: I0202 10:53:17.198243 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 10:53:17 crc kubenswrapper[4845]: I0202 10:53:17.215275 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:17 crc kubenswrapper[4845]: I0202 10:53:17.215419 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 
10:53:17 crc kubenswrapper[4845]: I0202 10:53:17.273243 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 10:53:20 crc kubenswrapper[4845]: I0202 10:53:20.631772 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:20 crc kubenswrapper[4845]: I0202 10:53:20.720227 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:53:20 crc kubenswrapper[4845]: I0202 10:53:20.720475 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="dnsmasq-dns" containerID="cri-o://b19033b665f9424ff6ceb80860a3f821b9318c6ff803e1c8ca9ff4c4824c2a24" gracePeriod=10 Feb 02 10:53:21 crc kubenswrapper[4845]: I0202 10:53:21.749181 4845 generic.go:334] "Generic (PLEG): container finished" podID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerID="b19033b665f9424ff6ceb80860a3f821b9318c6ff803e1c8ca9ff4c4824c2a24" exitCode=0 Feb 02 10:53:21 crc kubenswrapper[4845]: I0202 10:53:21.749509 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" event={"ID":"33b0a7fa-f66e-470e-95a3-a110ecec168b","Type":"ContainerDied","Data":"b19033b665f9424ff6ceb80860a3f821b9318c6ff803e1c8ca9ff4c4824c2a24"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.139643 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187378 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187459 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tnwb\" (UniqueName: \"kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187546 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187597 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187668 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.187790 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config\") pod \"33b0a7fa-f66e-470e-95a3-a110ecec168b\" (UID: \"33b0a7fa-f66e-470e-95a3-a110ecec168b\") " Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.211429 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb" (OuterVolumeSpecName: "kube-api-access-2tnwb") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "kube-api-access-2tnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.279774 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.282212 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.286935 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config" (OuterVolumeSpecName: "config") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.290998 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tnwb\" (UniqueName: \"kubernetes.io/projected/33b0a7fa-f66e-470e-95a3-a110ecec168b-kube-api-access-2tnwb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.291024 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.291034 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.291043 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.296483 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.301361 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33b0a7fa-f66e-470e-95a3-a110ecec168b" (UID: "33b0a7fa-f66e-470e-95a3-a110ecec168b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:22 crc kubenswrapper[4845]: E0202 10:53:22.359993 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ba91fb37-4550-4684-99bb-45dba169a879" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.393723 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.393756 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33b0a7fa-f66e-470e-95a3-a110ecec168b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.419953 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.501988 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.773481 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.773779 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" event={"ID":"33b0a7fa-f66e-470e-95a3-a110ecec168b","Type":"ContainerDied","Data":"24bfd21204b757a912a5fde09fd51c09b7d4dece5bb10e7fce06a7581904b6df"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.775005 4845 scope.go:117] "RemoveContainer" containerID="b19033b665f9424ff6ceb80860a3f821b9318c6ff803e1c8ca9ff4c4824c2a24" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.798719 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="ceilometer-notification-agent" containerID="cri-o://4c795b185512cbaa08a41087a290ab376088c31c6277e1a8f0ee3f21dc22200f" gracePeriod=30 Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.799035 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerStarted","Data":"7f93106b71edc6fc8f88297c4c620682fa2e4fe0213e9e1cad77a132eee7f48a"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.799089 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.799474 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="proxy-httpd" containerID="cri-o://7f93106b71edc6fc8f88297c4c620682fa2e4fe0213e9e1cad77a132eee7f48a" gracePeriod=30 Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.799544 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="sg-core" 
containerID="cri-o://423aeacd820ec5c2d675794591067804e90d1f2a6923ef3a9f13012b659813bc" gracePeriod=30 Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.809541 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555888887b-mbz72" event={"ID":"6dea749b-261a-4af3-979a-127dca4af07c","Type":"ContainerStarted","Data":"575ab7f2f7fe4e8106c2466a57472a57241cdaf710df402b30f388c226326ecb"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.809587 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-555888887b-mbz72" event={"ID":"6dea749b-261a-4af3-979a-127dca4af07c","Type":"ContainerStarted","Data":"e4fdb2540fa2b5886f2833bca4f1069d1b8cb2bad984df9fdfc72a28d64bea6a"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.832774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448c87f86-gdg49" event={"ID":"4d926fea-dae3-4818-a608-4d9fa52abef5","Type":"ContainerStarted","Data":"275e844bce010a7a4a656fff83cd8ffa88fbe0966b827ef884820dbb3e644a13"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.833204 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.833814 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.837282 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-954bfc4f9-dfghw" event={"ID":"a9ec709e-f840-4ba0-b631-77038f9c5551","Type":"ContainerStarted","Data":"73599b0ac8c799feeb156821205d5b156f8e7048f85c646545007104aad91fbc"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.837327 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-954bfc4f9-dfghw" 
event={"ID":"a9ec709e-f840-4ba0-b631-77038f9c5551","Type":"ContainerStarted","Data":"977364fb0517d6e9dae517c49539321a86f1f58be0fb4dd298f952ce75d3729c"} Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.864695 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-555888887b-mbz72" podStartSLOduration=3.383382898 podStartE2EDuration="13.864666601s" podCreationTimestamp="2026-02-02 10:53:09 +0000 UTC" firstStartedPulling="2026-02-02 10:53:11.18846511 +0000 UTC m=+1272.279866570" lastFinishedPulling="2026-02-02 10:53:21.669748823 +0000 UTC m=+1282.761150273" observedRunningTime="2026-02-02 10:53:22.845595436 +0000 UTC m=+1283.936996886" watchObservedRunningTime="2026-02-02 10:53:22.864666601 +0000 UTC m=+1283.956068051" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.922908 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8448c87f86-gdg49" podStartSLOduration=9.922875173 podStartE2EDuration="9.922875173s" podCreationTimestamp="2026-02-02 10:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:22.873930255 +0000 UTC m=+1283.965331705" watchObservedRunningTime="2026-02-02 10:53:22.922875173 +0000 UTC m=+1284.014276633" Feb 02 10:53:22 crc kubenswrapper[4845]: I0202 10:53:22.962326 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-954bfc4f9-dfghw" podStartSLOduration=3.618827651 podStartE2EDuration="13.962303789s" podCreationTimestamp="2026-02-02 10:53:09 +0000 UTC" firstStartedPulling="2026-02-02 10:53:11.326217473 +0000 UTC m=+1272.417618923" lastFinishedPulling="2026-02-02 10:53:21.669693611 +0000 UTC m=+1282.761095061" observedRunningTime="2026-02-02 10:53:22.903083798 +0000 UTC m=+1283.994485248" watchObservedRunningTime="2026-02-02 10:53:22.962303789 +0000 UTC m=+1284.053705239" Feb 
02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.021122 4845 scope.go:117] "RemoveContainer" containerID="720ce281ce34166ec16d662e6b2cfb3e73984fcbc45102640b4bba9183262c78" Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.024099 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.034090 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mrvw8"] Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.730844 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" path="/var/lib/kubelet/pods/33b0a7fa-f66e-470e-95a3-a110ecec168b/volumes" Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.916943 4845 generic.go:334] "Generic (PLEG): container finished" podID="ba91fb37-4550-4684-99bb-45dba169a879" containerID="7f93106b71edc6fc8f88297c4c620682fa2e4fe0213e9e1cad77a132eee7f48a" exitCode=0 Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.916988 4845 generic.go:334] "Generic (PLEG): container finished" podID="ba91fb37-4550-4684-99bb-45dba169a879" containerID="423aeacd820ec5c2d675794591067804e90d1f2a6923ef3a9f13012b659813bc" exitCode=2 Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917001 4845 generic.go:334] "Generic (PLEG): container finished" podID="ba91fb37-4550-4684-99bb-45dba169a879" containerID="4c795b185512cbaa08a41087a290ab376088c31c6277e1a8f0ee3f21dc22200f" exitCode=0 Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerDied","Data":"7f93106b71edc6fc8f88297c4c620682fa2e4fe0213e9e1cad77a132eee7f48a"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917146 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerDied","Data":"423aeacd820ec5c2d675794591067804e90d1f2a6923ef3a9f13012b659813bc"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917167 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerDied","Data":"4c795b185512cbaa08a41087a290ab376088c31c6277e1a8f0ee3f21dc22200f"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917178 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba91fb37-4550-4684-99bb-45dba169a879","Type":"ContainerDied","Data":"320ae8c7183d27e383573443ad820bda2273505766b721abc86327e40604e0ca"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.917205 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320ae8c7183d27e383573443ad820bda2273505766b721abc86327e40604e0ca" Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.925656 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g8b4r" event={"ID":"183b0ef9-490f-43a1-a464-2bd64a820ebd","Type":"ContainerStarted","Data":"b6c0a29825672ec889a1c4e9480e6e2959d05e730a7944a0c2ac39bff41e3be4"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.938319 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.948414 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8448c87f86-gdg49" event={"ID":"4d926fea-dae3-4818-a608-4d9fa52abef5","Type":"ContainerStarted","Data":"052db1f414ef06472a20f02fff0ddd12feda87931a4d2148b7c9490e7c8e3c47"} Feb 02 10:53:23 crc kubenswrapper[4845]: I0202 10:53:23.949721 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-g8b4r" podStartSLOduration=5.241277522 podStartE2EDuration="58.949698981s" podCreationTimestamp="2026-02-02 10:52:25 +0000 UTC" firstStartedPulling="2026-02-02 10:52:27.962034974 +0000 UTC m=+1229.053436424" lastFinishedPulling="2026-02-02 10:53:21.670456433 +0000 UTC m=+1282.761857883" observedRunningTime="2026-02-02 10:53:23.949513026 +0000 UTC m=+1285.040914476" watchObservedRunningTime="2026-02-02 10:53:23.949698981 +0000 UTC m=+1285.041100431" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.054550 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.054923 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.055112 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" 
(UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.055148 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.055299 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.055358 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g7v5\" (UniqueName: \"kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.055381 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd\") pod \"ba91fb37-4550-4684-99bb-45dba169a879\" (UID: \"ba91fb37-4550-4684-99bb-45dba169a879\") " Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.056050 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.056252 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.056913 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.056937 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba91fb37-4550-4684-99bb-45dba169a879-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.066180 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5" (OuterVolumeSpecName: "kube-api-access-4g7v5") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "kube-api-access-4g7v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.086848 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts" (OuterVolumeSpecName: "scripts") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.092984 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.160373 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.160413 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.160422 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g7v5\" (UniqueName: \"kubernetes.io/projected/ba91fb37-4550-4684-99bb-45dba169a879-kube-api-access-4g7v5\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.163052 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.214471 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data" (OuterVolumeSpecName: "config-data") pod "ba91fb37-4550-4684-99bb-45dba169a879" (UID: "ba91fb37-4550-4684-99bb-45dba169a879"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.263026 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.263064 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91fb37-4550-4684-99bb-45dba169a879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:24 crc kubenswrapper[4845]: I0202 10:53:24.959910 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.035324 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.050576 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.063788 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:25 crc kubenswrapper[4845]: E0202 10:53:25.064426 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="sg-core" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064453 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="sg-core" Feb 02 10:53:25 crc kubenswrapper[4845]: E0202 10:53:25.064490 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="proxy-httpd" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064501 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="proxy-httpd" Feb 02 10:53:25 crc kubenswrapper[4845]: E0202 10:53:25.064524 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="dnsmasq-dns" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064534 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="dnsmasq-dns" Feb 02 10:53:25 crc kubenswrapper[4845]: E0202 10:53:25.064563 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="ceilometer-notification-agent" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064573 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="ceilometer-notification-agent" Feb 02 10:53:25 crc kubenswrapper[4845]: E0202 10:53:25.064589 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="init" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064598 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="init" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064914 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="dnsmasq-dns" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064939 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="ceilometer-notification-agent" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064956 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="sg-core" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.064973 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba91fb37-4550-4684-99bb-45dba169a879" containerName="proxy-httpd" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.067187 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.070421 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.070769 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.085646 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185214 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185279 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z6ch\" (UniqueName: \"kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185316 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185341 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data\") pod \"ceilometer-0\" (UID: 
\"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185378 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185435 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.185460 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.287069 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.287139 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z6ch\" (UniqueName: \"kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.287190 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.287230 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.287760 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.288100 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.288348 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.288427 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc 
kubenswrapper[4845]: I0202 10:53:25.288728 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.292180 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.292514 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.293483 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.294720 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data\") pod \"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.315204 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z6ch\" (UniqueName: \"kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch\") pod 
\"ceilometer-0\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.399735 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:53:25 crc kubenswrapper[4845]: I0202 10:53:25.729320 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba91fb37-4550-4684-99bb-45dba169a879" path="/var/lib/kubelet/pods/ba91fb37-4550-4684-99bb-45dba169a879/volumes" Feb 02 10:53:26 crc kubenswrapper[4845]: I0202 10:53:26.027647 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:26 crc kubenswrapper[4845]: I0202 10:53:26.992959 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerStarted","Data":"fbe9c7e888f7c781c1053770c1b8f9cfc807348795f8c409217a85d5272ec120"} Feb 02 10:53:26 crc kubenswrapper[4845]: I0202 10:53:26.993268 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerStarted","Data":"253fc8db05e07f1a530e09a4e9ff070908466c9701a4e2cca1a5c237104581b4"} Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.081148 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-mrvw8" podUID="33b0a7fa-f66e-470e-95a3-a110ecec168b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: i/o timeout" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.271699 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.569337 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.574423 4845 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6898599c95-65qmn" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-api" containerID="cri-o://c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618" gracePeriod=30 Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.574999 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6898599c95-65qmn" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" containerID="cri-o://9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d" gracePeriod=30 Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.618665 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b8db7b6ff-lx6zl"] Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.620877 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.654010 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b8db7b6ff-lx6zl"] Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.686737 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6898599c95-65qmn" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.197:9696/\": read tcp 10.217.0.2:37340->10.217.0.197:9696: read: connection reset by peer" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.763701 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-combined-ca-bundle\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.763804 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7mj\" (UniqueName: \"kubernetes.io/projected/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-kube-api-access-vr7mj\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.763839 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-httpd-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.764074 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-ovndb-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.764171 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.764261 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-public-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.764344 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-internal-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866215 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-ovndb-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866276 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866319 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-public-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866373 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-internal-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866414 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-combined-ca-bundle\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866469 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr7mj\" (UniqueName: \"kubernetes.io/projected/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-kube-api-access-vr7mj\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.866486 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-httpd-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.872103 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-internal-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.872134 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-public-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.872853 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-ovndb-tls-certs\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.873212 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.874418 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-httpd-config\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.877301 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-combined-ca-bundle\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.886508 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr7mj\" (UniqueName: \"kubernetes.io/projected/7b4befc3-7f3f-4813-9c5e-9fac28d60f72-kube-api-access-vr7mj\") pod \"neutron-5b8db7b6ff-lx6zl\" (UID: \"7b4befc3-7f3f-4813-9c5e-9fac28d60f72\") " pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:27 crc kubenswrapper[4845]: I0202 10:53:27.948903 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:28 crc kubenswrapper[4845]: I0202 10:53:28.010740 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerStarted","Data":"cf0af5c5e3f564b6556befcc6a6f25d0bf34be4299c282d46a9c4ef4ba6b6020"} Feb 02 10:53:28 crc kubenswrapper[4845]: I0202 10:53:28.013711 4845 generic.go:334] "Generic (PLEG): container finished" podID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerID="9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d" exitCode=0 Feb 02 10:53:28 crc kubenswrapper[4845]: I0202 10:53:28.013749 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerDied","Data":"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d"} Feb 02 10:53:28 crc kubenswrapper[4845]: I0202 10:53:28.587472 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b8db7b6ff-lx6zl"] Feb 02 10:53:29 crc kubenswrapper[4845]: I0202 10:53:29.030534 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerStarted","Data":"667445270cb8d9e7bc55a98bf683b24cb0b8dad3cdc9fba27ac254a584303435"} Feb 02 10:53:29 crc kubenswrapper[4845]: I0202 10:53:29.036786 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b8db7b6ff-lx6zl" event={"ID":"7b4befc3-7f3f-4813-9c5e-9fac28d60f72","Type":"ContainerStarted","Data":"010d122d3c23baef91e81ddd89b34876f952d908b5e75ad031480c0b8b37cfcf"} Feb 02 10:53:29 crc kubenswrapper[4845]: I0202 10:53:29.036868 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b8db7b6ff-lx6zl" 
event={"ID":"7b4befc3-7f3f-4813-9c5e-9fac28d60f72","Type":"ContainerStarted","Data":"5b48343b7812c2a915ac1190e7ed3c58017b026297b081721f298a101490067a"} Feb 02 10:53:29 crc kubenswrapper[4845]: I0202 10:53:29.434045 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6898599c95-65qmn" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.197:9696/\": dial tcp 10.217.0.197:9696: connect: connection refused" Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.070372 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b8db7b6ff-lx6zl" event={"ID":"7b4befc3-7f3f-4813-9c5e-9fac28d60f72","Type":"ContainerStarted","Data":"e716c03f22b8b84ac51f1230761a01d739c457d9b43950e32626c7ae7a66172f"} Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.072025 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.824849 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.850845 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b8db7b6ff-lx6zl" podStartSLOduration=3.850801382 podStartE2EDuration="3.850801382s" podCreationTimestamp="2026-02-02 10:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:30.103297199 +0000 UTC m=+1291.194698649" watchObservedRunningTime="2026-02-02 10:53:30.850801382 +0000 UTC m=+1291.942202852" Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.879614 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8448c87f86-gdg49" Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.939297 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.939563 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-c848b759d-9s78l" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api-log" containerID="cri-o://176473ee3218eaa7c0eebfe1c20972c9902b004b5527eec40d3ad45e506fbd6b" gracePeriod=30 Feb 02 10:53:30 crc kubenswrapper[4845]: I0202 10:53:30.939713 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-c848b759d-9s78l" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api" containerID="cri-o://b43d9c1e2c65f2ee90a2f57f9bbc1d12c9cf5e6cdd4115a797d97d1794b7caa6" gracePeriod=30 Feb 02 10:53:31 crc kubenswrapper[4845]: I0202 10:53:31.094161 4845 generic.go:334] "Generic (PLEG): container finished" podID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerID="176473ee3218eaa7c0eebfe1c20972c9902b004b5527eec40d3ad45e506fbd6b" exitCode=143 Feb 02 10:53:31 crc kubenswrapper[4845]: I0202 10:53:31.094521 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerDied","Data":"176473ee3218eaa7c0eebfe1c20972c9902b004b5527eec40d3ad45e506fbd6b"} Feb 02 10:53:31 crc kubenswrapper[4845]: I0202 10:53:31.109837 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerStarted","Data":"8728f9654023c3d16f466f99002e731e63223d9431b4fa5e2833683a43d715a8"} Feb 02 10:53:31 crc kubenswrapper[4845]: I0202 10:53:31.110492 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:53:31 crc kubenswrapper[4845]: I0202 10:53:31.134508 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=1.760753242 podStartE2EDuration="6.134487722s" podCreationTimestamp="2026-02-02 10:53:25 +0000 UTC" firstStartedPulling="2026-02-02 10:53:26.038426008 +0000 UTC m=+1287.129827458" lastFinishedPulling="2026-02-02 10:53:30.412160498 +0000 UTC m=+1291.503561938" observedRunningTime="2026-02-02 10:53:31.133020851 +0000 UTC m=+1292.224422311" watchObservedRunningTime="2026-02-02 10:53:31.134487722 +0000 UTC m=+1292.225889172" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.062378 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.122042 4845 generic.go:334] "Generic (PLEG): container finished" podID="183b0ef9-490f-43a1-a464-2bd64a820ebd" containerID="b6c0a29825672ec889a1c4e9480e6e2959d05e730a7944a0c2ac39bff41e3be4" exitCode=0 Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.122114 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g8b4r" event={"ID":"183b0ef9-490f-43a1-a464-2bd64a820ebd","Type":"ContainerDied","Data":"b6c0a29825672ec889a1c4e9480e6e2959d05e730a7944a0c2ac39bff41e3be4"} Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.128725 4845 generic.go:334] "Generic (PLEG): container finished" podID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerID="c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618" exitCode=0 Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.128774 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6898599c95-65qmn" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.128855 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerDied","Data":"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618"} Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.129027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6898599c95-65qmn" event={"ID":"0d345372-d7c4-4094-b9cb-e2afbd2dbf54","Type":"ContainerDied","Data":"00ea85117a3a7a613efb0ee0b197731f8faf53d379b635789e63fb18dc175257"} Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.129048 4845 scope.go:117] "RemoveContainer" containerID="9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.154900 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.155178 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.155319 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.155428 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.156054 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk9ml\" (UniqueName: \"kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.156186 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.156326 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config\") pod \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\" (UID: \"0d345372-d7c4-4094-b9cb-e2afbd2dbf54\") " Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.165704 4845 scope.go:117] "RemoveContainer" containerID="c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.188102 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.190135 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml" (OuterVolumeSpecName: "kube-api-access-mk9ml") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "kube-api-access-mk9ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.261578 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk9ml\" (UniqueName: \"kubernetes.io/projected/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-kube-api-access-mk9ml\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.261823 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.277264 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.284158 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config" (OuterVolumeSpecName: "config") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.286012 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.302008 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.318092 4845 scope.go:117] "RemoveContainer" containerID="9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d" Feb 02 10:53:32 crc kubenswrapper[4845]: E0202 10:53:32.318836 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d\": container with ID starting with 9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d not found: ID does not exist" containerID="9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.318936 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d"} err="failed to get container status \"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d\": rpc error: code = NotFound desc = could not find container 
\"9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d\": container with ID starting with 9c9dd7addf0d5bf3f26f958d7335764b784970862b4639f903863e4dcd0c828d not found: ID does not exist" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.318977 4845 scope.go:117] "RemoveContainer" containerID="c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618" Feb 02 10:53:32 crc kubenswrapper[4845]: E0202 10:53:32.322140 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618\": container with ID starting with c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618 not found: ID does not exist" containerID="c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.322218 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618"} err="failed to get container status \"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618\": rpc error: code = NotFound desc = could not find container \"c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618\": container with ID starting with c741a0827d7fd6ab35372c65a855c22f8cc1528974b1ae3a9963caed9499f618 not found: ID does not exist" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.326293 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0d345372-d7c4-4094-b9cb-e2afbd2dbf54" (UID: "0d345372-d7c4-4094-b9cb-e2afbd2dbf54"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.368062 4845 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.368302 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.368402 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.368473 4845 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.368540 4845 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d345372-d7c4-4094-b9cb-e2afbd2dbf54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.526623 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:53:32 crc kubenswrapper[4845]: I0202 10:53:32.539429 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6898599c95-65qmn"] Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.563862 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.594770 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rflw4\" (UniqueName: \"kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.594956 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.595028 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.595080 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.595151 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.595198 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts\") pod \"183b0ef9-490f-43a1-a464-2bd64a820ebd\" (UID: \"183b0ef9-490f-43a1-a464-2bd64a820ebd\") " Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.600988 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts" (OuterVolumeSpecName: "scripts") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.603814 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4" (OuterVolumeSpecName: "kube-api-access-rflw4") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "kube-api-access-rflw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.612782 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.613027 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.641747 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.667266 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data" (OuterVolumeSpecName: "config-data") pod "183b0ef9-490f-43a1-a464-2bd64a820ebd" (UID: "183b0ef9-490f-43a1-a464-2bd64a820ebd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698541 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698577 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rflw4\" (UniqueName: \"kubernetes.io/projected/183b0ef9-490f-43a1-a464-2bd64a820ebd-kube-api-access-rflw4\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698591 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698600 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 
10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698608 4845 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/183b0ef9-490f-43a1-a464-2bd64a820ebd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.698616 4845 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/183b0ef9-490f-43a1-a464-2bd64a820ebd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:33 crc kubenswrapper[4845]: I0202 10:53:33.726200 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" path="/var/lib/kubelet/pods/0d345372-d7c4-4094-b9cb-e2afbd2dbf54/volumes" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.155747 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g8b4r" event={"ID":"183b0ef9-490f-43a1-a464-2bd64a820ebd","Type":"ContainerDied","Data":"8487cd80461a5551ea17adb0f75cb6e4ce51ee5bd0eda70e468bd0162117e3f9"} Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.156132 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8487cd80461a5551ea17adb0f75cb6e4ce51ee5bd0eda70e468bd0162117e3f9" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.155980 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g8b4r" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.415461 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:34 crc kubenswrapper[4845]: E0202 10:53:34.425109 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425142 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" Feb 02 10:53:34 crc kubenswrapper[4845]: E0202 10:53:34.425167 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-api" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425173 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-api" Feb 02 10:53:34 crc kubenswrapper[4845]: E0202 10:53:34.425205 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" containerName="cinder-db-sync" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425211 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" containerName="cinder-db-sync" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425530 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" containerName="cinder-db-sync" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425562 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-httpd" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.425575 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d345372-d7c4-4094-b9cb-e2afbd2dbf54" containerName="neutron-api" Feb 02 
10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.426742 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.447944 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.448594 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2glkz" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.448838 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.449007 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.452123 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.514748 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.514841 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb2n\" (UniqueName: \"kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.514970 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.515070 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.515153 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.515212 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.527764 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.530297 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.543155 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620186 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620262 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620333 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620391 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlb2n\" (UniqueName: \"kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620513 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620606 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.620727 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.653730 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlb2n\" (UniqueName: \"kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.658793 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.667400 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.678328 4845 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.705931 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.721771 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.722875 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.722935 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.722980 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc 
kubenswrapper[4845]: I0202 10:53:34.725984 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.726030 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98wr\" (UniqueName: \"kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.726141 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.728227 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.730679 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.752033 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.772124 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829066 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829111 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829144 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829271 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829289 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98wr\" (UniqueName: \"kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 
10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.829323 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.830248 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.831599 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.832304 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.833054 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.838314 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.862440 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98wr\" (UniqueName: \"kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr\") pod \"dnsmasq-dns-5c9776ccc5-qspp8\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952388 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952515 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952592 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952636 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jww\" (UniqueName: 
\"kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952678 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952754 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:34 crc kubenswrapper[4845]: I0202 10:53:34.952940 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.013031 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058477 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058576 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058683 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058719 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8jww\" (UniqueName: \"kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058755 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058816 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.058972 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.059080 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.059557 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.075421 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.083516 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.083776 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.092031 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.093597 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jww\" (UniqueName: \"kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww\") pod \"cinder-api-0\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.195769 4845 generic.go:334] "Generic (PLEG): container finished" podID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerID="b43d9c1e2c65f2ee90a2f57f9bbc1d12c9cf5e6cdd4115a797d97d1794b7caa6" exitCode=0 Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.195823 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerDied","Data":"b43d9c1e2c65f2ee90a2f57f9bbc1d12c9cf5e6cdd4115a797d97d1794b7caa6"} Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.229990 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.330656 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.373356 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom\") pod \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.373454 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn24r\" (UniqueName: \"kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r\") pod \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.373663 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data\") pod \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.373736 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs\") pod \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.373930 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle\") pod \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\" (UID: \"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98\") " Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.380018 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" (UID: "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.385692 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs" (OuterVolumeSpecName: "logs") pod "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" (UID: "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.388401 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r" (OuterVolumeSpecName: "kube-api-access-fn24r") pod "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" (UID: "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98"). InnerVolumeSpecName "kube-api-access-fn24r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.448498 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" (UID: "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.478170 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.478210 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.478226 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn24r\" (UniqueName: \"kubernetes.io/projected/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-kube-api-access-fn24r\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.478239 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.484592 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data" (OuterVolumeSpecName: "config-data") pod "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" (UID: "7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.534193 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.579927 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:35 crc kubenswrapper[4845]: I0202 10:53:35.764445 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.017154 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:53:36 crc kubenswrapper[4845]: W0202 10:53:36.019163 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06105adf_bd97_410f_922f_cb54a637955d.slice/crio-10b5879f559c2a482eb1195a6f65f86b7381b0fca31a6c70d7384ff0ddf1d778 WatchSource:0}: Error finding container 10b5879f559c2a482eb1195a6f65f86b7381b0fca31a6c70d7384ff0ddf1d778: Status 404 returned error can't find the container with id 10b5879f559c2a482eb1195a6f65f86b7381b0fca31a6c70d7384ff0ddf1d778 Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.210905 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerStarted","Data":"10b5879f559c2a482eb1195a6f65f86b7381b0fca31a6c70d7384ff0ddf1d778"} Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.212389 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerStarted","Data":"27e0db0e179d31ce7ea0e79507fa9b8ddbc8dd15b66fff98db116d1b86140fed"} Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.219179 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c848b759d-9s78l" event={"ID":"7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98","Type":"ContainerDied","Data":"9e1e2aebf9151dc6f3a68108d64e5db12d097d08be7641fdb5a06d1241111d90"} Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.219235 4845 scope.go:117] "RemoveContainer" containerID="b43d9c1e2c65f2ee90a2f57f9bbc1d12c9cf5e6cdd4115a797d97d1794b7caa6" Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.219368 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c848b759d-9s78l" Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.243215 4845 generic.go:334] "Generic (PLEG): container finished" podID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerID="de805387ebe60937953bfa8ca82aa39520aa199cff779fd5003f9f946eb62840" exitCode=0 Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.243261 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" event={"ID":"f57453e0-7229-4521-9d4a-769dc8c888fa","Type":"ContainerDied","Data":"de805387ebe60937953bfa8ca82aa39520aa199cff779fd5003f9f946eb62840"} Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.243308 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" event={"ID":"f57453e0-7229-4521-9d4a-769dc8c888fa","Type":"ContainerStarted","Data":"4772888bdcd0116c54162c4f207b2005eb03b8a93b86ecf4467b09a24d45e538"} Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.247922 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.258816 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-c848b759d-9s78l"] Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.373781 4845 scope.go:117] "RemoveContainer" 
containerID="176473ee3218eaa7c0eebfe1c20972c9902b004b5527eec40d3ad45e506fbd6b" Feb 02 10:53:36 crc kubenswrapper[4845]: I0202 10:53:36.519259 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.269409 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" event={"ID":"f57453e0-7229-4521-9d4a-769dc8c888fa","Type":"ContainerStarted","Data":"e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2"} Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.269721 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.279412 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerStarted","Data":"0b451f378e885ddb27d4b9a42dcd386c2a29d3bf05ddbf776583d1ff3fa31571"} Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.282818 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerStarted","Data":"36d898e83e0f0d7e2fb72b22a68f13dc46338bf7f3eccf52dcd8d9142d72c38d"} Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.292823 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" podStartSLOduration=3.292804764 podStartE2EDuration="3.292804764s" podCreationTimestamp="2026-02-02 10:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:37.290270461 +0000 UTC m=+1298.381671951" watchObservedRunningTime="2026-02-02 10:53:37.292804764 +0000 UTC m=+1298.384206204" Feb 02 10:53:37 crc kubenswrapper[4845]: I0202 10:53:37.728347 4845 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" path="/var/lib/kubelet/pods/7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98/volumes" Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.295389 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerStarted","Data":"ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709"} Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.295526 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api-log" containerID="cri-o://0b451f378e885ddb27d4b9a42dcd386c2a29d3bf05ddbf776583d1ff3fa31571" gracePeriod=30 Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.295567 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.295611 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api" containerID="cri-o://ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709" gracePeriod=30 Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.303150 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerStarted","Data":"1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5"} Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.323399 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.323375549 podStartE2EDuration="4.323375549s" podCreationTimestamp="2026-02-02 10:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 10:53:38.314303968 +0000 UTC m=+1299.405705418" watchObservedRunningTime="2026-02-02 10:53:38.323375549 +0000 UTC m=+1299.414776999" Feb 02 10:53:38 crc kubenswrapper[4845]: I0202 10:53:38.348262 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.489098286 podStartE2EDuration="4.348245045s" podCreationTimestamp="2026-02-02 10:53:34 +0000 UTC" firstStartedPulling="2026-02-02 10:53:35.589417975 +0000 UTC m=+1296.680819425" lastFinishedPulling="2026-02-02 10:53:36.448564734 +0000 UTC m=+1297.539966184" observedRunningTime="2026-02-02 10:53:38.339684899 +0000 UTC m=+1299.431086359" watchObservedRunningTime="2026-02-02 10:53:38.348245045 +0000 UTC m=+1299.439646495" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.313006 4845 generic.go:334] "Generic (PLEG): container finished" podID="06105adf-bd97-410f-922f-cb54a637955d" containerID="0b451f378e885ddb27d4b9a42dcd386c2a29d3bf05ddbf776583d1ff3fa31571" exitCode=143 Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.313112 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerDied","Data":"0b451f378e885ddb27d4b9a42dcd386c2a29d3bf05ddbf776583d1ff3fa31571"} Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.609414 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.619114 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f64c64d8-r7nkx" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.694288 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.694652 4845 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" containerID="cri-o://72291ce4c24275105ea624fdae6cb6154c6bb75e37f637d2f201663a1789f5ec" gracePeriod=30 Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.695192 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" containerID="cri-o://a21d26fcb519a4c746b991dcfeca12c3245ddc62427af655c6f5de2c40b04948" gracePeriod=30 Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.699945 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.700094 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.700411 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.700722 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.707837 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7fd6897c68-cspbg" 
podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.708053 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7fd6897c68-cspbg" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.199:8778/\": EOF" Feb 02 10:53:39 crc kubenswrapper[4845]: I0202 10:53:39.810541 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:53:40 crc kubenswrapper[4845]: I0202 10:53:40.056270 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c4f9db54b-5v9r8" Feb 02 10:53:40 crc kubenswrapper[4845]: I0202 10:53:40.325322 4845 generic.go:334] "Generic (PLEG): container finished" podID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerID="72291ce4c24275105ea624fdae6cb6154c6bb75e37f637d2f201663a1789f5ec" exitCode=143 Feb 02 10:53:40 crc kubenswrapper[4845]: I0202 10:53:40.325440 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerDied","Data":"72291ce4c24275105ea624fdae6cb6154c6bb75e37f637d2f201663a1789f5ec"} Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.386961 4845 generic.go:334] "Generic (PLEG): container finished" podID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerID="a21d26fcb519a4c746b991dcfeca12c3245ddc62427af655c6f5de2c40b04948" exitCode=0 Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.387050 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerDied","Data":"a21d26fcb519a4c746b991dcfeca12c3245ddc62427af655c6f5de2c40b04948"} Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 
10:53:43.574397 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706090 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706232 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706366 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706436 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll5b8\" (UniqueName: \"kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706603 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706671 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.706749 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.708476 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs" (OuterVolumeSpecName: "logs") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.713457 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8" (OuterVolumeSpecName: "kube-api-access-ll5b8") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "kube-api-access-ll5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.715194 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts" (OuterVolumeSpecName: "scripts") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.821531 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.821569 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3231a338-4ba7-4851-9fd5-a7ba84f13089-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.821583 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll5b8\" (UniqueName: \"kubernetes.io/projected/3231a338-4ba7-4851-9fd5-a7ba84f13089-kube-api-access-ll5b8\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.824585 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.848829 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 10:53:43 crc kubenswrapper[4845]: E0202 10:53:43.849632 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851091 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api" Feb 02 10:53:43 crc kubenswrapper[4845]: E0202 10:53:43.851128 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851173 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" Feb 02 10:53:43 crc kubenswrapper[4845]: E0202 10:53:43.851193 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851200 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" Feb 02 10:53:43 crc kubenswrapper[4845]: E0202 10:53:43.851213 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api-log" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851220 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api-log" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851516 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-api" Feb 02 10:53:43 crc 
kubenswrapper[4845]: I0202 10:53:43.851585 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851600 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" containerName="placement-log" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.851618 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddd7b77-b40e-4cbd-bce4-eecb7b7eae98" containerName="barbican-api-log" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.852795 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.852940 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.856948 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.857161 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.857343 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xtwwc" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.863031 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data" (OuterVolumeSpecName: "config-data") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.894987 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.922683 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.923537 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") pod \"3231a338-4ba7-4851-9fd5-a7ba84f13089\" (UID: \"3231a338-4ba7-4851-9fd5-a7ba84f13089\") " Feb 02 10:53:43 crc kubenswrapper[4845]: W0202 10:53:43.923697 4845 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3231a338-4ba7-4851-9fd5-a7ba84f13089/volumes/kubernetes.io~secret/internal-tls-certs Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.923730 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3231a338-4ba7-4851-9fd5-a7ba84f13089" (UID: "3231a338-4ba7-4851-9fd5-a7ba84f13089"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.923897 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8kc6\" (UniqueName: \"kubernetes.io/projected/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-kube-api-access-f8kc6\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.923949 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.923977 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.924324 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.924747 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.924765 4845 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.924776 4845 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:43 crc kubenswrapper[4845]: I0202 10:53:43.924786 4845 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3231a338-4ba7-4851-9fd5-a7ba84f13089-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.026487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.026695 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.027057 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8kc6\" (UniqueName: \"kubernetes.io/projected/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-kube-api-access-f8kc6\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.027135 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.029432 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.032302 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.033367 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.046019 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8kc6\" (UniqueName: \"kubernetes.io/projected/c10a41f9-4bda-4d90-81c1-09ed21f00b2b-kube-api-access-f8kc6\") pod \"openstackclient\" (UID: \"c10a41f9-4bda-4d90-81c1-09ed21f00b2b\") " pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.181837 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.412369 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fd6897c68-cspbg" event={"ID":"3231a338-4ba7-4851-9fd5-a7ba84f13089","Type":"ContainerDied","Data":"228de4e8bc7c765fb5d366d131a0a0268b9b4fac526b28962bf893b2beef69a2"} Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.412733 4845 scope.go:117] "RemoveContainer" containerID="a21d26fcb519a4c746b991dcfeca12c3245ddc62427af655c6f5de2c40b04948" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.412449 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fd6897c68-cspbg" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.452780 4845 scope.go:117] "RemoveContainer" containerID="72291ce4c24275105ea624fdae6cb6154c6bb75e37f637d2f201663a1789f5ec" Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.457955 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.467506 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7fd6897c68-cspbg"] Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.683963 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:53:44 crc kubenswrapper[4845]: W0202 10:53:44.684525 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10a41f9_4bda_4d90_81c1_09ed21f00b2b.slice/crio-d77ee8bee66783f8327bc67c1ef3799f9c9d870eca7bb6938cdb74b592de47d9 WatchSource:0}: Error finding container d77ee8bee66783f8327bc67c1ef3799f9c9d870eca7bb6938cdb74b592de47d9: Status 404 returned error can't find the container with id d77ee8bee66783f8327bc67c1ef3799f9c9d870eca7bb6938cdb74b592de47d9 Feb 02 10:53:44 crc kubenswrapper[4845]: I0202 10:53:44.984380 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.014053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.069111 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.125502 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.125736 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="dnsmasq-dns" containerID="cri-o://d65ee280aa67a3b9d4ada8a3d8e251139871dc36b778ac679ad93aaaa994de0c" gracePeriod=10 Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.426242 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c10a41f9-4bda-4d90-81c1-09ed21f00b2b","Type":"ContainerStarted","Data":"d77ee8bee66783f8327bc67c1ef3799f9c9d870eca7bb6938cdb74b592de47d9"} Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.438066 4845 generic.go:334] "Generic (PLEG): container finished" podID="1e98b98e-a993-4000-90f3-3372541369fb" containerID="d65ee280aa67a3b9d4ada8a3d8e251139871dc36b778ac679ad93aaaa994de0c" exitCode=0 Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.438135 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" event={"ID":"1e98b98e-a993-4000-90f3-3372541369fb","Type":"ContainerDied","Data":"d65ee280aa67a3b9d4ada8a3d8e251139871dc36b778ac679ad93aaaa994de0c"} Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.444916 4845 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="cinder-scheduler" containerID="cri-o://36d898e83e0f0d7e2fb72b22a68f13dc46338bf7f3eccf52dcd8d9142d72c38d" gracePeriod=30 Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.444969 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="probe" containerID="cri-o://1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5" gracePeriod=30 Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.768282 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.768766 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3231a338-4ba7-4851-9fd5-a7ba84f13089" path="/var/lib/kubelet/pods/3231a338-4ba7-4851-9fd5-a7ba84f13089/volumes" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.771709 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb\") pod \"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.771782 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c85th\" (UniqueName: \"kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th\") pod \"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.772958 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb\") pod 
\"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.773042 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc\") pod \"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.773130 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0\") pod \"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.773188 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config\") pod \"1e98b98e-a993-4000-90f3-3372541369fb\" (UID: \"1e98b98e-a993-4000-90f3-3372541369fb\") " Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.783915 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th" (OuterVolumeSpecName: "kube-api-access-c85th") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "kube-api-access-c85th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.876143 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c85th\" (UniqueName: \"kubernetes.io/projected/1e98b98e-a993-4000-90f3-3372541369fb-kube-api-access-c85th\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.894407 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.930479 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.932646 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.946328 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.949423 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config" (OuterVolumeSpecName: "config") pod "1e98b98e-a993-4000-90f3-3372541369fb" (UID: "1e98b98e-a993-4000-90f3-3372541369fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.976880 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.977145 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.977156 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:45 crc kubenswrapper[4845]: I0202 10:53:45.977165 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:45 crc 
kubenswrapper[4845]: I0202 10:53:45.977174 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98b98e-a993-4000-90f3-3372541369fb-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:46 crc kubenswrapper[4845]: E0202 10:53:46.196919 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda14c1bc_4bf5_451d_b547_f4695a1f1099.slice/crio-conmon-1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.237461 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.237551 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.474101 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" event={"ID":"1e98b98e-a993-4000-90f3-3372541369fb","Type":"ContainerDied","Data":"46aea51eef4e5bca23196251795de36a78dad7c530d09f830de8c2b61b899f53"} Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.474166 4845 scope.go:117] "RemoveContainer" containerID="d65ee280aa67a3b9d4ada8a3d8e251139871dc36b778ac679ad93aaaa994de0c" Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.474322 4845 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.482271 4845 generic.go:334] "Generic (PLEG): container finished" podID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerID="1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5" exitCode=0 Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.482326 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerDied","Data":"1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5"} Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.520008 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.523300 4845 scope.go:117] "RemoveContainer" containerID="a506c6cd7f566d8573c6029c7963e6c4b4d345abfac828b450e7916b80814864" Feb 02 10:53:46 crc kubenswrapper[4845]: I0202 10:53:46.546866 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4vk87"] Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.736380 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e98b98e-a993-4000-90f3-3372541369fb" path="/var/lib/kubelet/pods/1e98b98e-a993-4000-90f3-3372541369fb/volumes" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.797311 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67878d9fbc-npvwk"] Feb 02 10:53:47 crc kubenswrapper[4845]: E0202 10:53:47.797983 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="dnsmasq-dns" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.798081 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="dnsmasq-dns" Feb 02 10:53:47 crc kubenswrapper[4845]: E0202 
10:53:47.798148 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="init" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.798232 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="init" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.798513 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="dnsmasq-dns" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.799743 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.801696 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.802698 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.803316 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.812445 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67878d9fbc-npvwk"] Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816479 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-public-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816534 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrb7\" (UniqueName: 
\"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-kube-api-access-mcrb7\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816569 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-config-data\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816650 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-internal-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816726 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-combined-ca-bundle\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816756 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-log-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816790 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-run-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.816846 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-etc-swift\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919112 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-combined-ca-bundle\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919347 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-log-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919443 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-run-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919559 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-etc-swift\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919747 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-public-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919866 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrb7\" (UniqueName: \"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-kube-api-access-mcrb7\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.919986 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-config-data\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.920109 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-internal-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.928792 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-public-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.929694 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-log-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.930536 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d9f4b80-6273-4d77-9309-2ffecc5acc64-run-httpd\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.934234 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-combined-ca-bundle\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.934376 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-internal-tls-certs\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.935450 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d9f4b80-6273-4d77-9309-2ffecc5acc64-config-data\") pod 
\"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.944203 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-etc-swift\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.957895 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrb7\" (UniqueName: \"kubernetes.io/projected/1d9f4b80-6273-4d77-9309-2ffecc5acc64-kube-api-access-mcrb7\") pod \"swift-proxy-67878d9fbc-npvwk\" (UID: \"1d9f4b80-6273-4d77-9309-2ffecc5acc64\") " pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:47 crc kubenswrapper[4845]: I0202 10:53:47.995725 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 10:53:48 crc kubenswrapper[4845]: I0202 10:53:48.143485 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:48 crc kubenswrapper[4845]: I0202 10:53:48.894723 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67878d9fbc-npvwk"] Feb 02 10:53:49 crc kubenswrapper[4845]: I0202 10:53:49.521981 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67878d9fbc-npvwk" event={"ID":"1d9f4b80-6273-4d77-9309-2ffecc5acc64","Type":"ContainerStarted","Data":"5cea917cb8a90d8b39a3c28df04d02236b975af47d729fa57e6fca4b89381be8"} Feb 02 10:53:49 crc kubenswrapper[4845]: I0202 10:53:49.522661 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67878d9fbc-npvwk" event={"ID":"1d9f4b80-6273-4d77-9309-2ffecc5acc64","Type":"ContainerStarted","Data":"528266d746d8fd6ae55a22daa001556d0d8858e1559232b3016ca31c80fed2cf"} Feb 02 10:53:49 crc kubenswrapper[4845]: I0202 10:53:49.522683 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67878d9fbc-npvwk" event={"ID":"1d9f4b80-6273-4d77-9309-2ffecc5acc64","Type":"ContainerStarted","Data":"fc0768979a31fc82dc6576fdb675fd4f1aba2309fa75e8c4607d913f063143a6"} Feb 02 10:53:49 crc kubenswrapper[4845]: I0202 10:53:49.522714 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:49 crc kubenswrapper[4845]: I0202 10:53:49.553965 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67878d9fbc-npvwk" podStartSLOduration=2.553934486 podStartE2EDuration="2.553934486s" podCreationTimestamp="2026-02-02 10:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:49.541102406 +0000 UTC m=+1310.632503856" watchObservedRunningTime="2026-02-02 10:53:49.553934486 +0000 UTC m=+1310.645335956" Feb 02 10:53:50 crc kubenswrapper[4845]: I0202 10:53:50.537695 4845 
generic.go:334] "Generic (PLEG): container finished" podID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerID="36d898e83e0f0d7e2fb72b22a68f13dc46338bf7f3eccf52dcd8d9142d72c38d" exitCode=0 Feb 02 10:53:50 crc kubenswrapper[4845]: I0202 10:53:50.538990 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerDied","Data":"36d898e83e0f0d7e2fb72b22a68f13dc46338bf7f3eccf52dcd8d9142d72c38d"} Feb 02 10:53:50 crc kubenswrapper[4845]: I0202 10:53:50.539042 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:50 crc kubenswrapper[4845]: I0202 10:53:50.632935 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-4vk87" podUID="1e98b98e-a993-4000-90f3-3372541369fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.203:5353: i/o timeout" Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.683796 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.684433 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-central-agent" containerID="cri-o://fbe9c7e888f7c781c1053770c1b8f9cfc807348795f8c409217a85d5272ec120" gracePeriod=30 Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.685008 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd" containerID="cri-o://8728f9654023c3d16f466f99002e731e63223d9431b4fa5e2833683a43d715a8" gracePeriod=30 Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.685043 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="sg-core" containerID="cri-o://667445270cb8d9e7bc55a98bf683b24cb0b8dad3cdc9fba27ac254a584303435" gracePeriod=30 Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.685025 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-notification-agent" containerID="cri-o://cf0af5c5e3f564b6556befcc6a6f25d0bf34be4299c282d46a9c4ef4ba6b6020" gracePeriod=30 Feb 02 10:53:51 crc kubenswrapper[4845]: I0202 10:53:51.712118 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571118 4845 generic.go:334] "Generic (PLEG): container finished" podID="e4430b5f-6421-41e2-b338-3b215c57957a" containerID="8728f9654023c3d16f466f99002e731e63223d9431b4fa5e2833683a43d715a8" exitCode=0 Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571587 4845 generic.go:334] "Generic (PLEG): container finished" podID="e4430b5f-6421-41e2-b338-3b215c57957a" containerID="667445270cb8d9e7bc55a98bf683b24cb0b8dad3cdc9fba27ac254a584303435" exitCode=2 Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571600 4845 generic.go:334] "Generic (PLEG): container finished" podID="e4430b5f-6421-41e2-b338-3b215c57957a" containerID="fbe9c7e888f7c781c1053770c1b8f9cfc807348795f8c409217a85d5272ec120" exitCode=0 Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571219 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerDied","Data":"8728f9654023c3d16f466f99002e731e63223d9431b4fa5e2833683a43d715a8"} Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571660 4845 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerDied","Data":"667445270cb8d9e7bc55a98bf683b24cb0b8dad3cdc9fba27ac254a584303435"} Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.571679 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerDied","Data":"fbe9c7e888f7c781c1053770c1b8f9cfc807348795f8c409217a85d5272ec120"} Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.774437 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"] Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.776355 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.785520 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.785746 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.786003 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-2czql" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.797605 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"] Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.904582 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"] Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.906943 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.936307 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"] Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.958435 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6hs\" (UniqueName: \"kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.958668 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.958805 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.958842 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.976956 4845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-b9b757468-zfd7s"] Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.978724 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:52 crc kubenswrapper[4845]: I0202 10:53:52.983442 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.007358 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b9b757468-zfd7s"] Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.067842 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbcn2\" (UniqueName: \"kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.067998 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068055 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068082 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068221 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068342 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068385 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068406 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.068950 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6hs\" 
(UniqueName: \"kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.069420 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.087430 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.088089 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.088483 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.095236 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"] Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.103165 4845 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.111835 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.113300 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf6hs\" (UniqueName: \"kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs\") pod \"heat-engine-567746f76f-zjfmt\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.116405 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.155410 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"] Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173407 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbcn2\" (UniqueName: \"kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173488 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173514 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173545 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjss5\" (UniqueName: \"kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173569 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173601 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173655 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173684 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173699 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.173756 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.174871 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.175431 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.176408 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.176954 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.177012 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.208130 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbcn2\" (UniqueName: \"kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2\") pod \"dnsmasq-dns-7756b9d78c-nh2sl\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.243253 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.276734 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.276809 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.276836 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.276928 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c57d\" (UniqueName: \"kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.276984 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: 
\"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.277015 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjss5\" (UniqueName: \"kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.277036 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.277128 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.281160 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.286645 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " 
pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.292970 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.304737 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjss5\" (UniqueName: \"kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5\") pod \"heat-cfnapi-b9b757468-zfd7s\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.315424 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.379477 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.379555 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.379642 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c57d\" (UniqueName: 
\"kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.379775 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.383873 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.386073 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.386902 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data\") pod \"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.403821 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c57d\" (UniqueName: \"kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d\") pod 
\"heat-api-b475b44dc-fr2qw\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.406384 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.586700 4845 generic.go:334] "Generic (PLEG): container finished" podID="e4430b5f-6421-41e2-b338-3b215c57957a" containerID="cf0af5c5e3f564b6556befcc6a6f25d0bf34be4299c282d46a9c4ef4ba6b6020" exitCode=0 Feb 02 10:53:53 crc kubenswrapper[4845]: I0202 10:53:53.586740 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerDied","Data":"cf0af5c5e3f564b6556befcc6a6f25d0bf34be4299c282d46a9c4ef4ba6b6020"} Feb 02 10:53:55 crc kubenswrapper[4845]: I0202 10:53:55.401254 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": dial tcp 10.217.0.206:3000: connect: connection refused" Feb 02 10:53:56 crc kubenswrapper[4845]: I0202 10:53:56.484981 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:53:56 crc kubenswrapper[4845]: I0202 10:53:56.487745 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-log" containerID="cri-o://1e24bbe2d8cd0583fc986cf6fd412f527daea58b4a85706eb389322bf6ad3af7" gracePeriod=30 Feb 02 10:53:56 crc kubenswrapper[4845]: I0202 10:53:56.488276 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-httpd" 
containerID="cri-o://5c7e1e0f5ba6836be4b2cc0a23514474d1930ec71f3bf7b3e6b27bbccac7ee40" gracePeriod=30 Feb 02 10:53:56 crc kubenswrapper[4845]: I0202 10:53:56.641949 4845 generic.go:334] "Generic (PLEG): container finished" podID="48aa6807-1e0b-4eab-8255-01c885a24550" containerID="1e24bbe2d8cd0583fc986cf6fd412f527daea58b4a85706eb389322bf6ad3af7" exitCode=143 Feb 02 10:53:56 crc kubenswrapper[4845]: I0202 10:53:56.642003 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerDied","Data":"1e24bbe2d8cd0583fc986cf6fd412f527daea58b4a85706eb389322bf6ad3af7"} Feb 02 10:53:57 crc kubenswrapper[4845]: I0202 10:53:57.963399 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b8db7b6ff-lx6zl" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.069055 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.069684 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c658d9d4-mvn9b" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-api" containerID="cri-o://2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6" gracePeriod=30 Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.070340 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c658d9d4-mvn9b" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-httpd" containerID="cri-o://0318987d82017a4372eee65da6e584e622e1a0531f87350923bd3ea8ad37c0e3" gracePeriod=30 Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.153766 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.165319 4845 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67878d9fbc-npvwk" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.705247 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.707111 4845 generic.go:334] "Generic (PLEG): container finished" podID="381d0503-4113-48e1-a344-88e990400075" containerID="0318987d82017a4372eee65da6e584e622e1a0531f87350923bd3ea8ad37c0e3" exitCode=0 Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.707187 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerDied","Data":"0318987d82017a4372eee65da6e584e622e1a0531f87350923bd3ea8ad37c0e3"} Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.712685 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.712954 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da14c1bc-4bf5-451d-b547-f4695a1f1099","Type":"ContainerDied","Data":"27e0db0e179d31ce7ea0e79507fa9b8ddbc8dd15b66fff98db116d1b86140fed"} Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.713109 4845 scope.go:117] "RemoveContainer" containerID="1fc49f683570faab3ee6de0a4e2c9f7a1e48f5d5ec7c0c827c712793bfb544b5" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.769128 4845 scope.go:117] "RemoveContainer" containerID="36d898e83e0f0d7e2fb72b22a68f13dc46338bf7f3eccf52dcd8d9142d72c38d" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.833639 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: 
\"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.833925 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.833998 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.834075 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.834116 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlb2n\" (UniqueName: \"kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.834178 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts\") pod \"da14c1bc-4bf5-451d-b547-f4695a1f1099\" (UID: \"da14c1bc-4bf5-451d-b547-f4695a1f1099\") " Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.835496 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.839946 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n" (OuterVolumeSpecName: "kube-api-access-mlb2n") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "kube-api-access-mlb2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.845487 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.845663 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts" (OuterVolumeSpecName: "scripts") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.944531 4845 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da14c1bc-4bf5-451d-b547-f4695a1f1099-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.944833 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.944848 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlb2n\" (UniqueName: \"kubernetes.io/projected/da14c1bc-4bf5-451d-b547-f4695a1f1099-kube-api-access-mlb2n\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.944861 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:58 crc kubenswrapper[4845]: I0202 10:53:58.990276 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.111170 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148596 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148674 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148747 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148771 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z6ch\" (UniqueName: \"kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148801 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148916 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.148956 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts\") pod \"e4430b5f-6421-41e2-b338-3b215c57957a\" (UID: \"e4430b5f-6421-41e2-b338-3b215c57957a\") " Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.149695 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.149799 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.149960 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.153560 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts" (OuterVolumeSpecName: "scripts") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.154247 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch" (OuterVolumeSpecName: "kube-api-access-7z6ch") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "kube-api-access-7z6ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.188140 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.208687 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data" (OuterVolumeSpecName: "config-data") pod "da14c1bc-4bf5-451d-b547-f4695a1f1099" (UID: "da14c1bc-4bf5-451d-b547-f4695a1f1099"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254394 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da14c1bc-4bf5-451d-b547-f4695a1f1099-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254453 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254468 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254481 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z6ch\" (UniqueName: \"kubernetes.io/projected/e4430b5f-6421-41e2-b338-3b215c57957a-kube-api-access-7z6ch\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254493 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4430b5f-6421-41e2-b338-3b215c57957a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.254504 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.296067 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.298053 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data" (OuterVolumeSpecName: "config-data") pod "e4430b5f-6421-41e2-b338-3b215c57957a" (UID: "e4430b5f-6421-41e2-b338-3b215c57957a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.356956 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.357305 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4430b5f-6421-41e2-b338-3b215c57957a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.375787 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.392581 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.412950 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 10:53:59.413595 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-notification-agent" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413620 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-notification-agent" Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 
10:53:59.413636 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="sg-core" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413648 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="sg-core" Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 10:53:59.413663 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413670 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd" Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 10:53:59.413681 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="cinder-scheduler" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413687 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="cinder-scheduler" Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 10:53:59.413709 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-central-agent" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413717 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-central-agent" Feb 02 10:53:59 crc kubenswrapper[4845]: E0202 10:53:59.413739 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="probe" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.413748 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="probe" Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414011 4845 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-notification-agent"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414039 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="cinder-scheduler"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414054 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="sg-core"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414069 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" containerName="probe"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414089 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="ceilometer-central-agent"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.414102 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" containerName="proxy-httpd"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.415618 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.419090 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.445321 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.462341 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3483568c-cbaa-4f63-94e5-36d1a9534d31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.462674 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-scripts\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.462768 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlpt\" (UniqueName: \"kubernetes.io/projected/3483568c-cbaa-4f63-94e5-36d1a9534d31-kube-api-access-jwlpt\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.462964 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.463008 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.463154 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.610866 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.713698 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3483568c-cbaa-4f63-94e5-36d1a9534d31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.714264 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3483568c-cbaa-4f63-94e5-36d1a9534d31-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.734897 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-scripts\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.735257 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlpt\" (UniqueName: \"kubernetes.io/projected/3483568c-cbaa-4f63-94e5-36d1a9534d31-kube-api-access-jwlpt\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.735380 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.735425 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.735509 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.740660 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.742761 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-scripts\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.745778 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.754923 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.763711 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlpt\" (UniqueName: \"kubernetes.io/projected/3483568c-cbaa-4f63-94e5-36d1a9534d31-kube-api-access-jwlpt\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.768124 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3483568c-cbaa-4f63-94e5-36d1a9534d31-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3483568c-cbaa-4f63-94e5-36d1a9534d31\") " pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.819761 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da14c1bc-4bf5-451d-b547-f4695a1f1099" path="/var/lib/kubelet/pods/da14c1bc-4bf5-451d-b547-f4695a1f1099/volumes"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.821692 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.821970 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c10a41f9-4bda-4d90-81c1-09ed21f00b2b","Type":"ContainerStarted","Data":"07e9c8a05eed62468e720f63d1bfeab6d0ec45c2ff37d7ae78305724817dc630"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.821999 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-567746f76f-zjfmt" event={"ID":"30fbb4bb-1391-411d-adda-a41d223aed00","Type":"ContainerStarted","Data":"2ed051f1edd72eac12419e5ea83d9cdd76f867860395bc32f8a07b0d289f477d"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822012 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4430b5f-6421-41e2-b338-3b215c57957a","Type":"ContainerDied","Data":"253fc8db05e07f1a530e09a4e9ff070908466c9701a4e2cca1a5c237104581b4"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" event={"ID":"2ab561fd-1cd4-43c4-a09d-401ca966b4bb","Type":"ContainerStarted","Data":"250cda2944b48f437289aaa5905e90991c9b8dc1b5cb5593f83b10af7cbf343a"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822040 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822056 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b9b757468-zfd7s"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822076 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-658dbb4bcd-qn5fs"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.822105 4845 scope.go:117] "RemoveContainer" containerID="8728f9654023c3d16f466f99002e731e63223d9431b4fa5e2833683a43d715a8"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.829197 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.829232 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-658dbb4bcd-qn5fs"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.829245 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.829771 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.830272 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.834494 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b9b757468-zfd7s" event={"ID":"08e1823d-46cd-40c5-bea1-162473f9a4ce","Type":"ContainerStarted","Data":"99d07802bcf1c0ba9e3fd5b262b073cf214a20ee31d4f7170adba1e4c7702081"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.844276 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b475b44dc-fr2qw" event={"ID":"81ca8604-de7c-4752-8bda-89fccd0c1218","Type":"ContainerStarted","Data":"d41fe68659e16adfad7a630c185917e34c6be9d15f8fc9f1160f015fdca6072a"}
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.846396 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.846640 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.846804 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jn2\" (UniqueName: \"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.849206 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.865484 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.866200 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.873643 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.890265 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.908799 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"]
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.951077 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.953795 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8jn2\" (UniqueName: \"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.953834 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.954105 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dfp\" (UniqueName: \"kubernetes.io/projected/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-kube-api-access-z7dfp\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.955329 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data-custom\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.955445 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.956062 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.956134 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-combined-ca-bundle\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.958364 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.073019916 podStartE2EDuration="16.958348646s" podCreationTimestamp="2026-02-02 10:53:43 +0000 UTC" firstStartedPulling="2026-02-02 10:53:44.687088129 +0000 UTC m=+1305.778489579" lastFinishedPulling="2026-02-02 10:53:58.572416859 +0000 UTC m=+1319.663818309" observedRunningTime="2026-02-02 10:53:59.894435095 +0000 UTC m=+1320.985836545" watchObservedRunningTime="2026-02-02 10:53:59.958348646 +0000 UTC m=+1321.049750096"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.963748 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.963761 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.964334 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:53:59 crc kubenswrapper[4845]: I0202 10:53:59.978119 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8jn2\" (UniqueName: \"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2\") pod \"heat-api-655bdf96f4-zpj7r\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.023670 4845 scope.go:117] "RemoveContainer" containerID="667445270cb8d9e7bc55a98bf683b24cb0b8dad3cdc9fba27ac254a584303435"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.037737 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.048404 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060096 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzds\" (UniqueName: \"kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060218 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060273 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-combined-ca-bundle\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060335 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060435 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060456 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060597 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dfp\" (UniqueName: \"kubernetes.io/projected/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-kube-api-access-z7dfp\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.060671 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data-custom\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.062813 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.066041 4845 scope.go:117] "RemoveContainer" containerID="cf0af5c5e3f564b6556befcc6a6f25d0bf34be4299c282d46a9c4ef4ba6b6020"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.069093 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-combined-ca-bundle\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.069183 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data-custom\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.071915 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-config-data\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.072095 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.076452 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.080494 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.081621 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.084101 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dfp\" (UniqueName: \"kubernetes.io/projected/6cfd78fb-8f69-43d4-9a58-f7e2f5d27958-kube-api-access-z7dfp\") pod \"heat-engine-658dbb4bcd-qn5fs\" (UID: \"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958\") " pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.120745 4845 scope.go:117] "RemoveContainer" containerID="fbe9c7e888f7c781c1053770c1b8f9cfc807348795f8c409217a85d5272ec120"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165057 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165159 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzds\" (UniqueName: \"kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165256 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165279 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165330 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bjdd\" (UniqueName: \"kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165364 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165386 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165435 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165457 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165552 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.165663 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.173553 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.173699 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.179580 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.193574 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-658dbb4bcd-qn5fs"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.196870 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzds\" (UniqueName: \"kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds\") pod \"heat-cfnapi-6d94ddcf58-g79zr\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.217687 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-655bdf96f4-zpj7r"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.240828 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268219 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268314 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bjdd\" (UniqueName: \"kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268376 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268393 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268425 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.268533 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.271557 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.274786 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.276162 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.276305 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.276871 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.280470 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.293879 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bjdd\" (UniqueName: \"kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd\") pod \"ceilometer-0\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.409445 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.799498 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.893423 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-567746f76f-zjfmt" event={"ID":"30fbb4bb-1391-411d-adda-a41d223aed00","Type":"ContainerStarted","Data":"307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93"}
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.902899 4845 generic.go:334] "Generic (PLEG): container finished" podID="48aa6807-1e0b-4eab-8255-01c885a24550" containerID="5c7e1e0f5ba6836be4b2cc0a23514474d1930ec71f3bf7b3e6b27bbccac7ee40" exitCode=0
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.902975 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerDied","Data":"5c7e1e0f5ba6836be4b2cc0a23514474d1930ec71f3bf7b3e6b27bbccac7ee40"}
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.903013 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"48aa6807-1e0b-4eab-8255-01c885a24550","Type":"ContainerDied","Data":"046426bd44987200c7af4b3e2c8d25a4f244d0fd82b246a7123cbf79584728b3"}
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.903027 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="046426bd44987200c7af4b3e2c8d25a4f244d0fd82b246a7123cbf79584728b3"
Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.920534 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-567746f76f-zjfmt" podStartSLOduration=8.920512271 podStartE2EDuration="8.920512271s" podCreationTimestamp="2026-02-02 10:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:00.911072279 +0000 UTC m=+1322.002473729" watchObservedRunningTime="2026-02-02 10:54:00.920512271 +0000 UTC m=+1322.011913721" Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.922466 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" event={"ID":"2ab561fd-1cd4-43c4-a09d-401ca966b4bb","Type":"ContainerDied","Data":"47eae6a3cd68dd92b0b808c46cdd7b757872d2c3608874a20759716d81ea4849"} Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.922411 4845 generic.go:334] "Generic (PLEG): container finished" podID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerID="47eae6a3cd68dd92b0b808c46cdd7b757872d2c3608874a20759716d81ea4849" exitCode=0 Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.927517 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3483568c-cbaa-4f63-94e5-36d1a9534d31","Type":"ContainerStarted","Data":"33cfaf5e3509b0c0eafd1b0df4a044108c0bac39fdab1e895132c057046c46c5"} Feb 02 10:54:00 crc kubenswrapper[4845]: I0202 10:54:00.960485 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.108187 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.110189 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.110707 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.110913 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.111178 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzv2r\" (UniqueName: \"kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.111214 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.111233 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.111414 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"48aa6807-1e0b-4eab-8255-01c885a24550\" (UID: \"48aa6807-1e0b-4eab-8255-01c885a24550\") " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.113821 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.120516 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs" (OuterVolumeSpecName: "logs") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.130936 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"] Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.135316 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r" (OuterVolumeSpecName: "kube-api-access-nzv2r") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "kube-api-access-nzv2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.141704 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts" (OuterVolumeSpecName: "scripts") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.169745 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-658dbb4bcd-qn5fs"] Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.198334 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.212228 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7" (OuterVolumeSpecName: "glance") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "pvc-a16f116c-8f63-4ae9-a645-587add90fda7". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215315 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzv2r\" (UniqueName: \"kubernetes.io/projected/48aa6807-1e0b-4eab-8255-01c885a24550-kube-api-access-nzv2r\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215342 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215370 4845 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") on node \"crc\" " Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215381 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48aa6807-1e0b-4eab-8255-01c885a24550-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215393 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.215403 4845 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.321742 4845 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.322510 4845 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a16f116c-8f63-4ae9-a645-587add90fda7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7") on node "crc" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.371451 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.383813 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"] Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.393284 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data" (OuterVolumeSpecName: "config-data") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.405129 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48aa6807-1e0b-4eab-8255-01c885a24550" (UID: "48aa6807-1e0b-4eab-8255-01c885a24550"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.419772 4845 reconciler_common.go:293] "Volume detached for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.419806 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.419816 4845 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48aa6807-1e0b-4eab-8255-01c885a24550-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.738725 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4430b5f-6421-41e2-b338-3b215c57957a" path="/var/lib/kubelet/pods/e4430b5f-6421-41e2-b338-3b215c57957a/volumes" Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.968005 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" event={"ID":"06e2175d-c446-4586-a3cb-e5819314abfe","Type":"ContainerStarted","Data":"5e9e84a85d37d5a663e5d2555b2bd6af1b0b56601697215185d84529a118289f"} Feb 02 10:54:01 crc kubenswrapper[4845]: I0202 10:54:01.971195 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655bdf96f4-zpj7r" event={"ID":"1b30c63b-6407-4f1d-a393-c4f33a758db8","Type":"ContainerStarted","Data":"1958eba93e5442b126732bbc2709ccb6a5ad1ffec89971c483a8a27cda81d546"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.001506 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerStarted","Data":"3ed9e63dbb2ac681fa2a644a55a08157b6bb091be1c082086d50bfa19e192457"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.017145 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" event={"ID":"2ab561fd-1cd4-43c4-a09d-401ca966b4bb","Type":"ContainerStarted","Data":"f6562082a819ade2f17a46123a0743170963965766d739159cebc380dae9a85f"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.017924 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.026514 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3483568c-cbaa-4f63-94e5-36d1a9534d31","Type":"ContainerStarted","Data":"ff92ba859349704e56aa88a5d47c5542e780357aabff232513a8a1649b5f1a5e"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.033298 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.033299 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-658dbb4bcd-qn5fs" event={"ID":"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958","Type":"ContainerStarted","Data":"1ed2f6f188a24c838bbd168de4e2c1b317808190e41810b07cb51f63eaca0ef4"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.033448 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-658dbb4bcd-qn5fs" event={"ID":"6cfd78fb-8f69-43d4-9a58-f7e2f5d27958","Type":"ContainerStarted","Data":"56b7d4d125ac2aad8ac2eaecdd09195080a7b3a15aedeb0e8b2bfa938a97c790"} Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.033785 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.059645 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" podStartSLOduration=10.059623671 podStartE2EDuration="10.059623671s" podCreationTimestamp="2026-02-02 10:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:02.041026926 +0000 UTC m=+1323.132428406" watchObservedRunningTime="2026-02-02 10:54:02.059623671 +0000 UTC m=+1323.151025131" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.070136 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-658dbb4bcd-qn5fs" podStartSLOduration=3.070113763 podStartE2EDuration="3.070113763s" podCreationTimestamp="2026-02-02 10:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:02.06410051 +0000 UTC m=+1323.155501960" watchObservedRunningTime="2026-02-02 10:54:02.070113763 +0000 UTC 
m=+1323.161515213" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.136328 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.208720 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.223067 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:54:02 crc kubenswrapper[4845]: E0202 10:54:02.223586 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-log" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.223598 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-log" Feb 02 10:54:02 crc kubenswrapper[4845]: E0202 10:54:02.223649 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-httpd" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.223656 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-httpd" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.223875 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-log" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.223991 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" containerName="glance-httpd" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.225357 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.230490 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.230744 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.236646 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342014 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342088 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342164 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-logs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342201 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342250 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342289 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-scripts\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342347 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65zv\" (UniqueName: \"kubernetes.io/projected/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-kube-api-access-f65zv\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.342423 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444594 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444685 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444718 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444775 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-logs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444802 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444831 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444861 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-scripts\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.444921 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65zv\" (UniqueName: \"kubernetes.io/projected/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-kube-api-access-f65zv\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.446363 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-logs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.446668 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.460356 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.460407 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ed60d0e4f5ad2fb51e67cadb4519054184ad51c31b402d173121c9411d32387/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.461762 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.462603 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.468385 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.468500 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.481856 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65zv\" (UniqueName: \"kubernetes.io/projected/80eee60d-7cee-4b29-b022-9f5e8e5d6bdb-kube-api-access-f65zv\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.542486 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a16f116c-8f63-4ae9-a645-587add90fda7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a16f116c-8f63-4ae9-a645-587add90fda7\") pod \"glance-default-external-api-0\" (UID: \"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb\") " pod="openstack/glance-default-external-api-0" Feb 02 10:54:02 crc kubenswrapper[4845]: I0202 10:54:02.566587 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.058593 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3483568c-cbaa-4f63-94e5-36d1a9534d31","Type":"ContainerStarted","Data":"e3ffefa6e81103866221c0a24be7a59af18fdf6a36510dd2c2fd0a3099d37b45"} Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.070875 4845 generic.go:334] "Generic (PLEG): container finished" podID="381d0503-4113-48e1-a344-88e990400075" containerID="2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6" exitCode=0 Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.071943 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerDied","Data":"2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6"} Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.072759 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-658dbb4bcd-qn5fs" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.115377 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.115354341 podStartE2EDuration="4.115354341s" podCreationTimestamp="2026-02-02 10:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:03.094419658 +0000 UTC m=+1324.185821108" watchObservedRunningTime="2026-02-02 10:54:03.115354341 +0000 UTC m=+1324.206755791" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.138019 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.168127 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7998b4fc87-n5g2f"] Feb 
02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.170347 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.173733 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.174436 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.182037 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b9b757468-zfd7s"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.198620 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7998b4fc87-n5g2f"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.251351 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6dcccd9c6c-tq64l"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.264406 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.265425 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6dcccd9c6c-tq64l"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.267062 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.267220 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.276764 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-public-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.278183 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data-custom\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.278328 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-combined-ca-bundle\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.278412 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.278508 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2rb\" (UniqueName: \"kubernetes.io/projected/7dfab927-78ef-4105-a07b-a109690fda89-kube-api-access-dx2rb\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.278703 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-internal-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383012 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-public-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383079 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data-custom\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383121 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data-custom\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383142 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-combined-ca-bundle\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383179 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383209 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-public-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383247 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2rb\" (UniqueName: \"kubernetes.io/projected/7dfab927-78ef-4105-a07b-a109690fda89-kube-api-access-dx2rb\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383278 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383313 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-internal-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383357 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jbf\" (UniqueName: \"kubernetes.io/projected/1225250d-8a00-47d3-acea-856fa864dff5-kube-api-access-m4jbf\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383384 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-combined-ca-bundle\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.383406 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-internal-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.390386 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-public-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.390967 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-internal-tls-certs\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.391876 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.391911 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-combined-ca-bundle\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.395945 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dfab927-78ef-4105-a07b-a109690fda89-config-data-custom\") pod \"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.404354 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2rb\" (UniqueName: \"kubernetes.io/projected/7dfab927-78ef-4105-a07b-a109690fda89-kube-api-access-dx2rb\") pod 
\"heat-api-7998b4fc87-n5g2f\" (UID: \"7dfab927-78ef-4105-a07b-a109690fda89\") " pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.485901 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data-custom\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.485974 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-public-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.486027 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.486086 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-internal-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.486151 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jbf\" (UniqueName: \"kubernetes.io/projected/1225250d-8a00-47d3-acea-856fa864dff5-kube-api-access-m4jbf\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: 
\"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.486182 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-combined-ca-bundle\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.490982 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-internal-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.491348 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.492863 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-public-tls-certs\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.493569 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-combined-ca-bundle\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " 
pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.497122 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.498652 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1225250d-8a00-47d3-acea-856fa864dff5-config-data-custom\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.528361 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jbf\" (UniqueName: \"kubernetes.io/projected/1225250d-8a00-47d3-acea-856fa864dff5-kube-api-access-m4jbf\") pod \"heat-cfnapi-6dcccd9c6c-tq64l\" (UID: \"1225250d-8a00-47d3-acea-856fa864dff5\") " pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.569026 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.569256 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-log" containerID="cri-o://4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd" gracePeriod=30 Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.569713 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-httpd" containerID="cri-o://e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4" gracePeriod=30 Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.612861 4845 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:03 crc kubenswrapper[4845]: I0202 10:54:03.739641 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48aa6807-1e0b-4eab-8255-01c885a24550" path="/var/lib/kubelet/pods/48aa6807-1e0b-4eab-8255-01c885a24550/volumes" Feb 02 10:54:04 crc kubenswrapper[4845]: I0202 10:54:04.098289 4845 generic.go:334] "Generic (PLEG): container finished" podID="7f69079e-af81-421c-870a-2a08c1b2420e" containerID="4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd" exitCode=143 Feb 02 10:54:04 crc kubenswrapper[4845]: I0202 10:54:04.098381 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerDied","Data":"4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd"} Feb 02 10:54:04 crc kubenswrapper[4845]: I0202 10:54:04.866634 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.398434 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.554821 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc9m4\" (UniqueName: \"kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4\") pod \"381d0503-4113-48e1-a344-88e990400075\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.554934 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config\") pod \"381d0503-4113-48e1-a344-88e990400075\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.554984 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config\") pod \"381d0503-4113-48e1-a344-88e990400075\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.555107 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle\") pod \"381d0503-4113-48e1-a344-88e990400075\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.555354 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs\") pod \"381d0503-4113-48e1-a344-88e990400075\" (UID: \"381d0503-4113-48e1-a344-88e990400075\") " Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.594216 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "381d0503-4113-48e1-a344-88e990400075" (UID: "381d0503-4113-48e1-a344-88e990400075"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.608767 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7998b4fc87-n5g2f"] Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.629639 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4" (OuterVolumeSpecName: "kube-api-access-sc9m4") pod "381d0503-4113-48e1-a344-88e990400075" (UID: "381d0503-4113-48e1-a344-88e990400075"). InnerVolumeSpecName "kube-api-access-sc9m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.639095 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.659018 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc9m4\" (UniqueName: \"kubernetes.io/projected/381d0503-4113-48e1-a344-88e990400075-kube-api-access-sc9m4\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.659052 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.790088 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6dcccd9c6c-tq64l"] Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.911386 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 10:54:05 crc kubenswrapper[4845]: I0202 10:54:05.996325 
4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "381d0503-4113-48e1-a344-88e990400075" (UID: "381d0503-4113-48e1-a344-88e990400075"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.074170 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.107030 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "381d0503-4113-48e1-a344-88e990400075" (UID: "381d0503-4113-48e1-a344-88e990400075"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.127688 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config" (OuterVolumeSpecName: "config") pod "381d0503-4113-48e1-a344-88e990400075" (UID: "381d0503-4113-48e1-a344-88e990400075"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.176461 4845 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.176501 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/381d0503-4113-48e1-a344-88e990400075-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.191253 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7998b4fc87-n5g2f" event={"ID":"7dfab927-78ef-4105-a07b-a109690fda89","Type":"ContainerStarted","Data":"d579eeff24b4fe2a30800b3904deea4e676f86ceaf721aba5f0d2b2af4914cd5"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.203189 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b9b757468-zfd7s" event={"ID":"08e1823d-46cd-40c5-bea1-162473f9a4ce","Type":"ContainerStarted","Data":"9eb0cc21db22e7b0a6e9194e496c67f804362485d557f665096269f5e604e637"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.203385 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-b9b757468-zfd7s" podUID="08e1823d-46cd-40c5-bea1-162473f9a4ce" containerName="heat-cfnapi" containerID="cri-o://9eb0cc21db22e7b0a6e9194e496c67f804362485d557f665096269f5e604e637" gracePeriod=60 Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.204146 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.224023 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-b9b757468-zfd7s" podStartSLOduration=8.85305982 podStartE2EDuration="14.224003103s" 
podCreationTimestamp="2026-02-02 10:53:52 +0000 UTC" firstStartedPulling="2026-02-02 10:53:59.582508894 +0000 UTC m=+1320.673910344" lastFinishedPulling="2026-02-02 10:54:04.953452177 +0000 UTC m=+1326.044853627" observedRunningTime="2026-02-02 10:54:06.221921463 +0000 UTC m=+1327.313322933" watchObservedRunningTime="2026-02-02 10:54:06.224003103 +0000 UTC m=+1327.315404553" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.247568 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c658d9d4-mvn9b" event={"ID":"381d0503-4113-48e1-a344-88e990400075","Type":"ContainerDied","Data":"e0403804c19c7d887fabdb04dd94aa27e4ca3026135843fda8f93cf69fb0c8cd"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.247643 4845 scope.go:117] "RemoveContainer" containerID="0318987d82017a4372eee65da6e584e622e1a0531f87350923bd3ea8ad37c0e3" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.247832 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c658d9d4-mvn9b" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.254240 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" event={"ID":"1225250d-8a00-47d3-acea-856fa864dff5","Type":"ContainerStarted","Data":"f9eb7854a008987a6893bb48b3ca1c67ab3968d1bd26c03725333564ff086950"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.274673 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b475b44dc-fr2qw" event={"ID":"81ca8604-de7c-4752-8bda-89fccd0c1218","Type":"ContainerStarted","Data":"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.274859 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-b475b44dc-fr2qw" podUID="81ca8604-de7c-4752-8bda-89fccd0c1218" containerName="heat-api" 
containerID="cri-o://e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda" gracePeriod=60 Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.274958 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.291971 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" event={"ID":"06e2175d-c446-4586-a3cb-e5819314abfe","Type":"ContainerStarted","Data":"b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.295186 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.312703 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b475b44dc-fr2qw" podStartSLOduration=8.991389053 podStartE2EDuration="14.312678616s" podCreationTimestamp="2026-02-02 10:53:52 +0000 UTC" firstStartedPulling="2026-02-02 10:53:59.659230414 +0000 UTC m=+1320.750631864" lastFinishedPulling="2026-02-02 10:54:04.980519977 +0000 UTC m=+1326.071921427" observedRunningTime="2026-02-02 10:54:06.303916514 +0000 UTC m=+1327.395317964" watchObservedRunningTime="2026-02-02 10:54:06.312678616 +0000 UTC m=+1327.404080066" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.318480 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb","Type":"ContainerStarted","Data":"caa6241387363604f43a7866e079962e35eb6d7bf4fe0b831783629994d1d233"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.334492 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerStarted","Data":"65f19a5e53861afad5be0dc22811e405a7089640d2466a967d0205688cbae9c3"} Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.383082 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" podStartSLOduration=3.663436359 podStartE2EDuration="7.383054983s" podCreationTimestamp="2026-02-02 10:53:59 +0000 UTC" firstStartedPulling="2026-02-02 10:54:01.397150326 +0000 UTC m=+1322.488551776" lastFinishedPulling="2026-02-02 10:54:05.11676895 +0000 UTC m=+1326.208170400" observedRunningTime="2026-02-02 10:54:06.33434078 +0000 UTC m=+1327.425742230" watchObservedRunningTime="2026-02-02 10:54:06.383054983 +0000 UTC m=+1327.474456433" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.620052 4845 scope.go:117] "RemoveContainer" containerID="2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6" Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.620622 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:54:06 crc kubenswrapper[4845]: I0202 10:54:06.635536 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c658d9d4-mvn9b"] Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.365035 4845 generic.go:334] "Generic (PLEG): container finished" podID="7f69079e-af81-421c-870a-2a08c1b2420e" containerID="e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4" exitCode=0 Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.365310 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerDied","Data":"e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.372342 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" event={"ID":"1225250d-8a00-47d3-acea-856fa864dff5","Type":"ContainerStarted","Data":"58e59e68a691afa56c0ba5b9ffddbdb9d05bb3d4917cb33aef8779ee193c6080"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.373388 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.391971 4845 generic.go:334] "Generic (PLEG): container finished" podID="06e2175d-c446-4586-a3cb-e5819314abfe" containerID="b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59" exitCode=1 Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.392055 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" event={"ID":"06e2175d-c446-4586-a3cb-e5819314abfe","Type":"ContainerDied","Data":"b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.392826 4845 scope.go:117] "RemoveContainer" containerID="b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.398354 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" podStartSLOduration=4.398340617 podStartE2EDuration="4.398340617s" podCreationTimestamp="2026-02-02 10:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:07.397237515 +0000 UTC m=+1328.488638985" watchObservedRunningTime="2026-02-02 10:54:07.398340617 +0000 UTC m=+1328.489742067" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.406111 4845 generic.go:334] "Generic (PLEG): container finished" podID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerID="7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995" exitCode=1 Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 
10:54:07.406212 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655bdf96f4-zpj7r" event={"ID":"1b30c63b-6407-4f1d-a393-c4f33a758db8","Type":"ContainerDied","Data":"7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.407052 4845 scope.go:117] "RemoveContainer" containerID="7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.433166 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb","Type":"ContainerStarted","Data":"56081e1e9570ba420135229155c0ec53a579472990c940c108f334aa79c5a2cb"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.506174 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerStarted","Data":"1e8465847af87f5185fe9a371926a2dffc326ca101ce721ffdfe10ea1d45b3b5"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.508317 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7998b4fc87-n5g2f" event={"ID":"7dfab927-78ef-4105-a07b-a109690fda89","Type":"ContainerStarted","Data":"e1229872b3f4bfa1ee396eba8f47d9c69760c36de7fa47ea153df19a95651718"} Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.510029 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.563378 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7998b4fc87-n5g2f" podStartSLOduration=4.563354848 podStartE2EDuration="4.563354848s" podCreationTimestamp="2026-02-02 10:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:07.545088332 
+0000 UTC m=+1328.636489792" watchObservedRunningTime="2026-02-02 10:54:07.563354848 +0000 UTC m=+1328.654756298" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.767643 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381d0503-4113-48e1-a344-88e990400075" path="/var/lib/kubelet/pods/381d0503-4113-48e1-a344-88e990400075/volumes" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.820914 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.928722 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.928784 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.928991 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.929135 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 
10:54:07.929171 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.929190 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.929241 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.929268 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl7xq\" (UniqueName: \"kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.932758 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.933115 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs" (OuterVolumeSpecName: "logs") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.938699 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts" (OuterVolumeSpecName: "scripts") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.951026 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq" (OuterVolumeSpecName: "kube-api-access-cl7xq") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "kube-api-access-cl7xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:07 crc kubenswrapper[4845]: I0202 10:54:07.990127 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995" (OuterVolumeSpecName: "glance") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.055606 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl7xq\" (UniqueName: \"kubernetes.io/projected/7f69079e-af81-421c-870a-2a08c1b2420e-kube-api-access-cl7xq\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.055658 4845 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") on node \"crc\" " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.055674 4845 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.055685 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.055696 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f69079e-af81-421c-870a-2a08c1b2420e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.059787 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: E0202 10:54:08.092702 4845 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs podName:7f69079e-af81-421c-870a-2a08c1b2420e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:08.59266454 +0000 UTC m=+1329.684065990 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e") : error deleting /var/lib/kubelet/pods/7f69079e-af81-421c-870a-2a08c1b2420e/volume-subpaths: remove /var/lib/kubelet/pods/7f69079e-af81-421c-870a-2a08c1b2420e/volume-subpaths: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.100247 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data" (OuterVolumeSpecName: "config-data") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.134531 4845 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.134734 4845 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995") on node "crc" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.157804 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.158084 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.158179 4845 reconciler_common.go:293] "Volume detached for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.245152 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.337537 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.341207 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="dnsmasq-dns" containerID="cri-o://e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2" gracePeriod=10 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.591979 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerStarted","Data":"3ebbd0d0a7130e66b17b81004ea1929c3e2d40fef4c6767da3df2300291382c2"} Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.605739 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ca8604_de7c_4752_8bda_89fccd0c1218.slice/crio-conmon-e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ca8604_de7c_4752_8bda_89fccd0c1218.slice/crio-conmon-e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.605783 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ca8604_de7c_4752_8bda_89fccd0c1218.slice/crio-e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ca8604_de7c_4752_8bda_89fccd0c1218.slice/crio-e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.607416 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.607831 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f69079e-af81-421c-870a-2a08c1b2420e","Type":"ContainerDied","Data":"4d7f9cbfa39969e34e788860572765e355857e65be648d766beb851dfaf208a7"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.607895 4845 scope.go:117] "RemoveContainer" containerID="e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.608007 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.608956 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-conmon-b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-conmon-b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.609001 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 
10:54:08.609032 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-conmon-7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-conmon-7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.611574 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.620120 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-conmon-2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-conmon-2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.620380 4845 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-conmon-515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-conmon-515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.620463 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e2175d_c446_4586_a3cb_e5819314abfe.slice/crio-2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: W0202 10:54:08.620654 4845 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b30c63b_6407_4f1d_a393_c4f33a758db8.slice/crio-515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8.scope: no such file or directory Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.630516 4845 generic.go:334] "Generic (PLEG): container finished" podID="06105adf-bd97-410f-922f-cb54a637955d" containerID="ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709" exitCode=137 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.630606 
4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerDied","Data":"ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.658073 4845 generic.go:334] "Generic (PLEG): container finished" podID="81ca8604-de7c-4752-8bda-89fccd0c1218" containerID="e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda" exitCode=0 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.658188 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b475b44dc-fr2qw" event={"ID":"81ca8604-de7c-4752-8bda-89fccd0c1218","Type":"ContainerDied","Data":"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.658910 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b475b44dc-fr2qw" event={"ID":"81ca8604-de7c-4752-8bda-89fccd0c1218","Type":"ContainerDied","Data":"d41fe68659e16adfad7a630c185917e34c6be9d15f8fc9f1160f015fdca6072a"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.659015 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-b475b44dc-fr2qw" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.672782 4845 generic.go:334] "Generic (PLEG): container finished" podID="06e2175d-c446-4586-a3cb-e5819314abfe" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" exitCode=1 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.672843 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" event={"ID":"06e2175d-c446-4586-a3cb-e5819314abfe","Type":"ContainerDied","Data":"2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.673661 4845 scope.go:117] "RemoveContainer" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" Feb 02 10:54:08 crc kubenswrapper[4845]: E0202 10:54:08.674017 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d94ddcf58-g79zr_openstack(06e2175d-c446-4586-a3cb-e5819314abfe)\"" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.674968 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle\") pod \"81ca8604-de7c-4752-8bda-89fccd0c1218\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.675063 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data\") pod \"81ca8604-de7c-4752-8bda-89fccd0c1218\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 
10:54:08.675341 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c57d\" (UniqueName: \"kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d\") pod \"81ca8604-de7c-4752-8bda-89fccd0c1218\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.675405 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") pod \"7f69079e-af81-421c-870a-2a08c1b2420e\" (UID: \"7f69079e-af81-421c-870a-2a08c1b2420e\") " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.675444 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom\") pod \"81ca8604-de7c-4752-8bda-89fccd0c1218\" (UID: \"81ca8604-de7c-4752-8bda-89fccd0c1218\") " Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.702473 4845 generic.go:334] "Generic (PLEG): container finished" podID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerID="e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2" exitCode=0 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.702524 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" event={"ID":"f57453e0-7229-4521-9d4a-769dc8c888fa","Type":"ContainerDied","Data":"e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.705955 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d" (OuterVolumeSpecName: "kube-api-access-7c57d") pod "81ca8604-de7c-4752-8bda-89fccd0c1218" (UID: "81ca8604-de7c-4752-8bda-89fccd0c1218"). 
InnerVolumeSpecName "kube-api-access-7c57d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.710093 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81ca8604-de7c-4752-8bda-89fccd0c1218" (UID: "81ca8604-de7c-4752-8bda-89fccd0c1218"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.729744 4845 scope.go:117] "RemoveContainer" containerID="4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.744085 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7f69079e-af81-421c-870a-2a08c1b2420e" (UID: "7f69079e-af81-421c-870a-2a08c1b2420e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.748296 4845 generic.go:334] "Generic (PLEG): container finished" podID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" exitCode=1 Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.748473 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655bdf96f4-zpj7r" event={"ID":"1b30c63b-6407-4f1d-a393-c4f33a758db8","Type":"ContainerDied","Data":"515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.750187 4845 scope.go:117] "RemoveContainer" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" Feb 02 10:54:08 crc kubenswrapper[4845]: E0202 10:54:08.750476 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-655bdf96f4-zpj7r_openstack(1b30c63b-6407-4f1d-a393-c4f33a758db8)\"" pod="openstack/heat-api-655bdf96f4-zpj7r" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.773974 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80eee60d-7cee-4b29-b022-9f5e8e5d6bdb","Type":"ContainerStarted","Data":"e8d6d33865830ba7c7e5872dbdebead945559e5fed30e6a6855364535c7f2acd"} Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.780840 4845 scope.go:117] "RemoveContainer" containerID="e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.787497 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"81ca8604-de7c-4752-8bda-89fccd0c1218" (UID: "81ca8604-de7c-4752-8bda-89fccd0c1218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.787526 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c57d\" (UniqueName: \"kubernetes.io/projected/81ca8604-de7c-4752-8bda-89fccd0c1218-kube-api-access-7c57d\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.787545 4845 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f69079e-af81-421c-870a-2a08c1b2420e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.787554 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.819835 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data" (OuterVolumeSpecName: "config-data") pod "81ca8604-de7c-4752-8bda-89fccd0c1218" (UID: "81ca8604-de7c-4752-8bda-89fccd0c1218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.827373 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.827351895 podStartE2EDuration="6.827351895s" podCreationTimestamp="2026-02-02 10:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:08.807262226 +0000 UTC m=+1329.898663676" watchObservedRunningTime="2026-02-02 10:54:08.827351895 +0000 UTC m=+1329.918753345" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.843341 4845 scope.go:117] "RemoveContainer" containerID="e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda" Feb 02 10:54:08 crc kubenswrapper[4845]: E0202 10:54:08.845014 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda\": container with ID starting with e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda not found: ID does not exist" containerID="e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.845078 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda"} err="failed to get container status \"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda\": rpc error: code = NotFound desc = could not find container \"e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda\": container with ID starting with e4ef4309ad042f4deb3e676cecfdbf7fbe26cbd2cdfa1e7701941c8c44343dda not found: ID does not exist" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.845111 4845 scope.go:117] "RemoveContainer" 
containerID="b6d861f3d4f1122d7cbdb51df30dffc6c15caaf7e9adea6c717cc462a2d36e59" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.890388 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: I0202 10:54:08.890430 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca8604-de7c-4752-8bda-89fccd0c1218-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:08 crc kubenswrapper[4845]: E0202 10:54:08.940269 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381d0503_4113_48e1_a344_88e990400075.slice/crio-conmon-2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381d0503_4113_48e1_a344_88e990400075.slice/crio-e0403804c19c7d887fabdb04dd94aa27e4ca3026135843fda8f93cf69fb0c8cd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f69079e_af81_421c_870a_2a08c1b2420e.slice/crio-e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06105adf_bd97_410f_922f_cb54a637955d.slice/crio-conmon-ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f69079e_af81_421c_870a_2a08c1b2420e.slice/crio-conmon-4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48aa6807_1e0b_4eab_8255_01c885a24550.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381d0503_4113_48e1_a344_88e990400075.slice/crio-2ea8dbd44d235dcde26b7387e5ca4d94d64fb20bb5d89d73c2a48c90da6ef1d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f69079e_af81_421c_870a_2a08c1b2420e.slice/crio-conmon-e01c6288e6f46f1f5e6d28589b3c379a1a0749fccb437477b6b9e2e2597123a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57453e0_7229_4521_9d4a_769dc8c888fa.slice/crio-e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f69079e_af81_421c_870a_2a08c1b2420e.slice/crio-4f20fb73216b64db7b3a8f01b837e3181d5ffdb5d4d8bf409ca31ea2e79b0bcd.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.060657 4845 scope.go:117] "RemoveContainer" containerID="7c3cfa22f88c71170abec828b19bef4d61f6d07bacfd495c9500d6b092c37995" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.316043 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.325690 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.337689 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.340751 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341532 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341562 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q98wr\" (UniqueName: \"kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341782 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341897 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341921 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.341994 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb\") pod \"f57453e0-7229-4521-9d4a-769dc8c888fa\" (UID: \"f57453e0-7229-4521-9d4a-769dc8c888fa\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.350240 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr" (OuterVolumeSpecName: "kube-api-access-q98wr") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "kube-api-access-q98wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.450414 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8jww\" (UniqueName: \"kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.450609 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.450873 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle\") pod 
\"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.450984 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.451009 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.451610 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q98wr\" (UniqueName: \"kubernetes.io/projected/f57453e0-7229-4521-9d4a-769dc8c888fa-kube-api-access-q98wr\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.459218 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.476561 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.487842 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.488252 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww" (OuterVolumeSpecName: "kube-api-access-k8jww") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "kube-api-access-k8jww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.488764 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts" (OuterVolumeSpecName: "scripts") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.503968 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.517750 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config" (OuterVolumeSpecName: "config") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.547928 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b475b44dc-fr2qw"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.549080 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.550379 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.553356 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.553582 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id\") pod \"06105adf-bd97-410f-922f-cb54a637955d\" (UID: \"06105adf-bd97-410f-922f-cb54a637955d\") " Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.558251 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.561852 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.563384 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.563557 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.563666 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.563747 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.562387 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs" (OuterVolumeSpecName: "logs") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.563953 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.564039 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.564144 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8jww\" (UniqueName: \"kubernetes.io/projected/06105adf-bd97-410f-922f-cb54a637955d-kube-api-access-k8jww\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.572818 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573266 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573282 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-api" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573295 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-log" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573301 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-log" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573322 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="init" Feb 
02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573329 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="init" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573348 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ca8604-de7c-4752-8bda-89fccd0c1218" containerName="heat-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573354 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ca8604-de7c-4752-8bda-89fccd0c1218" containerName="heat-api" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573367 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573372 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573383 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="dnsmasq-dns" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573388 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="dnsmasq-dns" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573395 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573400 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573415 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573420 4845 
state_mem.go:107] "Deleted CPUSet assignment" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.573439 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api-log" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573444 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api-log" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573697 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573712 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" containerName="dnsmasq-dns" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573720 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="06105adf-bd97-410f-922f-cb54a637955d" containerName="cinder-api-log" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573729 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573735 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573742 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ca8604-de7c-4752-8bda-89fccd0c1218" containerName="heat-api" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573757 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="381d0503-4113-48e1-a344-88e990400075" containerName="neutron-httpd" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.573777 4845 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" containerName="glance-log" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.576412 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.578557 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.578762 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.582955 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data" (OuterVolumeSpecName: "config-data") pod "06105adf-bd97-410f-922f-cb54a637955d" (UID: "06105adf-bd97-410f-922f-cb54a637955d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.609785 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.611343 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f57453e0-7229-4521-9d4a-769dc8c888fa" (UID: "f57453e0-7229-4521-9d4a-769dc8c888fa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.686871 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f57453e0-7229-4521-9d4a-769dc8c888fa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.687225 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06105adf-bd97-410f-922f-cb54a637955d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.687242 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06105adf-bd97-410f-922f-cb54a637955d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.687251 4845 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06105adf-bd97-410f-922f-cb54a637955d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.740364 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f69079e-af81-421c-870a-2a08c1b2420e" path="/var/lib/kubelet/pods/7f69079e-af81-421c-870a-2a08c1b2420e/volumes" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.741467 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ca8604-de7c-4752-8bda-89fccd0c1218" path="/var/lib/kubelet/pods/81ca8604-de7c-4752-8bda-89fccd0c1218/volumes" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.788800 4845 scope.go:117] "RemoveContainer" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.789370 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-cfnapi pod=heat-cfnapi-6d94ddcf58-g79zr_openstack(06e2175d-c446-4586-a3cb-e5819314abfe)\"" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.792741 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.792834 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.792983 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.793165 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.793279 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.793749 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwxq\" (UniqueName: \"kubernetes.io/projected/fbdeff72-81f9-4063-8704-d97b21e01b82-kube-api-access-sbwxq\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.793810 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.793840 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.794154 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.794251 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qspp8" event={"ID":"f57453e0-7229-4521-9d4a-769dc8c888fa","Type":"ContainerDied","Data":"4772888bdcd0116c54162c4f207b2005eb03b8a93b86ecf4467b09a24d45e538"} Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.794332 4845 scope.go:117] "RemoveContainer" containerID="e5275c3566c176f1596ba660dd53acc1de661d183bfb3ade1e8619590805afd2" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.798865 4845 scope.go:117] "RemoveContainer" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" Feb 02 10:54:09 crc kubenswrapper[4845]: E0202 10:54:09.799241 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-655bdf96f4-zpj7r_openstack(1b30c63b-6407-4f1d-a393-c4f33a758db8)\"" pod="openstack/heat-api-655bdf96f4-zpj7r" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.813377 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.813492 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"06105adf-bd97-410f-922f-cb54a637955d","Type":"ContainerDied","Data":"10b5879f559c2a482eb1195a6f65f86b7381b0fca31a6c70d7384ff0ddf1d778"} Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.859081 4845 scope.go:117] "RemoveContainer" containerID="de805387ebe60937953bfa8ca82aa39520aa199cff779fd5003f9f946eb62840" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.876818 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897281 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbwxq\" (UniqueName: \"kubernetes.io/projected/fbdeff72-81f9-4063-8704-d97b21e01b82-kube-api-access-sbwxq\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897325 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897348 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897382 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897409 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897470 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897525 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.897611 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.898213 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.898966 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdeff72-81f9-4063-8704-d97b21e01b82-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.904221 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qspp8"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.906797 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.907491 4845 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.907528 4845 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a65ad887e305c92ac1c8235cc9c5fc327f1ea7ce91b9974356e11ee00bc2f81/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.908250 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.921660 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.925411 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdeff72-81f9-4063-8704-d97b21e01b82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.934977 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.943307 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sbwxq\" (UniqueName: \"kubernetes.io/projected/fbdeff72-81f9-4063-8704-d97b21e01b82-kube-api-access-sbwxq\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.967136 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.968708 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fba07dd-e1ed-404b-8436-a4d0a8acd995\") pod \"glance-default-internal-api-0\" (UID: \"fbdeff72-81f9-4063-8704-d97b21e01b82\") " pod="openstack/glance-default-internal-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.979135 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.981087 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.984130 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.985500 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 10:54:09 crc kubenswrapper[4845]: I0202 10:54:09.985716 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.000051 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.106784 4845 scope.go:117] "RemoveContainer" containerID="ddf1463358b0ba98c5bf9d38609a756d1bcc25cada5f4b3714a84897700e8709" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109437 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/1800fe94-c9b9-4a5a-963a-75d82a4eab94-kube-api-access-8r7hn\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109498 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109627 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800fe94-c9b9-4a5a-963a-75d82a4eab94-logs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " 
pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109696 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1800fe94-c9b9-4a5a-963a-75d82a4eab94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109715 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109750 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109797 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109899 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data-custom\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.109949 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-scripts\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.159575 4845 scope.go:117] "RemoveContainer" containerID="0b451f378e885ddb27d4b9a42dcd386c2a29d3bf05ddbf776583d1ff3fa31571" Feb 02 10:54:10 crc kubenswrapper[4845]: E0202 10:54:10.192662 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.211864 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.211961 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data-custom\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-scripts\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212145 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/1800fe94-c9b9-4a5a-963a-75d82a4eab94-kube-api-access-8r7hn\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212180 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212290 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800fe94-c9b9-4a5a-963a-75d82a4eab94-logs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212369 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1800fe94-c9b9-4a5a-963a-75d82a4eab94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212400 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.212445 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " 
pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.213044 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800fe94-c9b9-4a5a-963a-75d82a4eab94-logs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.213201 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1800fe94-c9b9-4a5a-963a-75d82a4eab94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.218295 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.218645 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-655bdf96f4-zpj7r" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.218712 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-655bdf96f4-zpj7r" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.218950 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.219000 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.219190 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-config-data-custom\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.220918 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-scripts\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.221352 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.236435 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.242183 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.242274 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.249374 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7hn\" (UniqueName: \"kubernetes.io/projected/1800fe94-c9b9-4a5a-963a-75d82a4eab94-kube-api-access-8r7hn\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " 
pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.249731 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800fe94-c9b9-4a5a-963a-75d82a4eab94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1800fe94-c9b9-4a5a-963a-75d82a4eab94\") " pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.461439 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.859746 4845 scope.go:117] "RemoveContainer" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" Feb 02 10:54:10 crc kubenswrapper[4845]: E0202 10:54:10.861257 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d94ddcf58-g79zr_openstack(06e2175d-c446-4586-a3cb-e5819314abfe)\"" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.861701 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-central-agent" containerID="cri-o://65f19a5e53861afad5be0dc22811e405a7089640d2466a967d0205688cbae9c3" gracePeriod=30 Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.861913 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerStarted","Data":"f97ac7a8db772c50e60d2fbaebaa59d3f1748dd362df2a5683272ada45ea3c75"} Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.862788 4845 scope.go:117] "RemoveContainer" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" Feb 02 
10:54:10 crc kubenswrapper[4845]: E0202 10:54:10.863126 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-655bdf96f4-zpj7r_openstack(1b30c63b-6407-4f1d-a393-c4f33a758db8)\"" pod="openstack/heat-api-655bdf96f4-zpj7r" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.863696 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.864278 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="sg-core" containerID="cri-o://3ebbd0d0a7130e66b17b81004ea1929c3e2d40fef4c6767da3df2300291382c2" gracePeriod=30 Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.864468 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-notification-agent" containerID="cri-o://1e8465847af87f5185fe9a371926a2dffc326ca101ce721ffdfe10ea1d45b3b5" gracePeriod=30 Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.867728 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="proxy-httpd" containerID="cri-o://f97ac7a8db772c50e60d2fbaebaa59d3f1748dd362df2a5683272ada45ea3c75" gracePeriod=30 Feb 02 10:54:10 crc kubenswrapper[4845]: I0202 10:54:10.941639 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.508295479 podStartE2EDuration="11.941616833s" podCreationTimestamp="2026-02-02 10:53:59 +0000 UTC" firstStartedPulling="2026-02-02 10:54:01.43481625 +0000 UTC m=+1322.526217700" 
lastFinishedPulling="2026-02-02 10:54:09.868137604 +0000 UTC m=+1330.959539054" observedRunningTime="2026-02-02 10:54:10.924678076 +0000 UTC m=+1332.016079526" watchObservedRunningTime="2026-02-02 10:54:10.941616833 +0000 UTC m=+1332.033018283" Feb 02 10:54:10 crc kubenswrapper[4845]: W0202 10:54:10.963618 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbdeff72_81f9_4063_8704_d97b21e01b82.slice/crio-f212d7b4211cc3dd8d32e4e27b7705352f55de84f33307cc1330c2a7479ce163 WatchSource:0}: Error finding container f212d7b4211cc3dd8d32e4e27b7705352f55de84f33307cc1330c2a7479ce163: Status 404 returned error can't find the container with id f212d7b4211cc3dd8d32e4e27b7705352f55de84f33307cc1330c2a7479ce163 Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.000136 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 10:54:11 crc kubenswrapper[4845]: W0202 10:54:11.208631 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1800fe94_c9b9_4a5a_963a_75d82a4eab94.slice/crio-763a710669e6fdae557c43f1cd05064fb8856e301fa761483c58eb11a44ecf07 WatchSource:0}: Error finding container 763a710669e6fdae557c43f1cd05064fb8856e301fa761483c58eb11a44ecf07: Status 404 returned error can't find the container with id 763a710669e6fdae557c43f1cd05064fb8856e301fa761483c58eb11a44ecf07 Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.223142 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.729687 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06105adf-bd97-410f-922f-cb54a637955d" path="/var/lib/kubelet/pods/06105adf-bd97-410f-922f-cb54a637955d/volumes" Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.730862 4845 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f57453e0-7229-4521-9d4a-769dc8c888fa" path="/var/lib/kubelet/pods/f57453e0-7229-4521-9d4a-769dc8c888fa/volumes" Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.882281 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1800fe94-c9b9-4a5a-963a-75d82a4eab94","Type":"ContainerStarted","Data":"763a710669e6fdae557c43f1cd05064fb8856e301fa761483c58eb11a44ecf07"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886182 4845 generic.go:334] "Generic (PLEG): container finished" podID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerID="f97ac7a8db772c50e60d2fbaebaa59d3f1748dd362df2a5683272ada45ea3c75" exitCode=0 Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886214 4845 generic.go:334] "Generic (PLEG): container finished" podID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerID="3ebbd0d0a7130e66b17b81004ea1929c3e2d40fef4c6767da3df2300291382c2" exitCode=2 Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886223 4845 generic.go:334] "Generic (PLEG): container finished" podID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerID="1e8465847af87f5185fe9a371926a2dffc326ca101ce721ffdfe10ea1d45b3b5" exitCode=0 Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886255 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerDied","Data":"f97ac7a8db772c50e60d2fbaebaa59d3f1748dd362df2a5683272ada45ea3c75"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886323 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerDied","Data":"3ebbd0d0a7130e66b17b81004ea1929c3e2d40fef4c6767da3df2300291382c2"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.886340 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerDied","Data":"1e8465847af87f5185fe9a371926a2dffc326ca101ce721ffdfe10ea1d45b3b5"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.888013 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbdeff72-81f9-4063-8704-d97b21e01b82","Type":"ContainerStarted","Data":"95b6a6e8c90dd7982731049877f0620a009fbc1c4cb5abb5fe80f438d47aa8b1"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.888051 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbdeff72-81f9-4063-8704-d97b21e01b82","Type":"ContainerStarted","Data":"f212d7b4211cc3dd8d32e4e27b7705352f55de84f33307cc1330c2a7479ce163"} Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.889210 4845 scope.go:117] "RemoveContainer" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" Feb 02 10:54:11 crc kubenswrapper[4845]: I0202 10:54:11.889484 4845 scope.go:117] "RemoveContainer" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" Feb 02 10:54:11 crc kubenswrapper[4845]: E0202 10:54:11.889573 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d94ddcf58-g79zr_openstack(06e2175d-c446-4586-a3cb-e5819314abfe)\"" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" Feb 02 10:54:11 crc kubenswrapper[4845]: E0202 10:54:11.889916 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-655bdf96f4-zpj7r_openstack(1b30c63b-6407-4f1d-a393-c4f33a758db8)\"" pod="openstack/heat-api-655bdf96f4-zpj7r" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" Feb 02 10:54:12 crc 
kubenswrapper[4845]: I0202 10:54:12.567878 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.568514 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.623792 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.626101 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.904215 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbdeff72-81f9-4063-8704-d97b21e01b82","Type":"ContainerStarted","Data":"8c275e40d70032eaf110681fe17246e3822b17ae904b3c83b5277490b47b3543"} Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.907031 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1800fe94-c9b9-4a5a-963a-75d82a4eab94","Type":"ContainerStarted","Data":"481de43b9e5202c6f309950a865e8f57a5f190ad0427bae791b66d535ea14cd0"} Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.907074 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1800fe94-c9b9-4a5a-963a-75d82a4eab94","Type":"ContainerStarted","Data":"14b905db344b97fb6fd4ec19c9968f2d05e85faf5dc8d6ecb7839ad6c0cb2410"} Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.907554 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.907594 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 
10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.927512 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.927480946 podStartE2EDuration="3.927480946s" podCreationTimestamp="2026-02-02 10:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:12.923659756 +0000 UTC m=+1334.015061226" watchObservedRunningTime="2026-02-02 10:54:12.927480946 +0000 UTC m=+1334.018882396" Feb 02 10:54:12 crc kubenswrapper[4845]: I0202 10:54:12.962452 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.962425312 podStartE2EDuration="3.962425312s" podCreationTimestamp="2026-02-02 10:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:12.951650372 +0000 UTC m=+1334.043051832" watchObservedRunningTime="2026-02-02 10:54:12.962425312 +0000 UTC m=+1334.053826762" Feb 02 10:54:13 crc kubenswrapper[4845]: I0202 10:54:13.155005 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:54:13 crc kubenswrapper[4845]: I0202 10:54:13.921837 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:54:14 crc kubenswrapper[4845]: I0202 10:54:14.871124 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.258987 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6dcccd9c6c-tq64l" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.372109 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"] Feb 02 
10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.456858 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.456994 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.739123 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7998b4fc87-n5g2f" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.833856 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"] Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.881500 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.951494 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.972420 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.972689 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d94ddcf58-g79zr" event={"ID":"06e2175d-c446-4586-a3cb-e5819314abfe","Type":"ContainerDied","Data":"5e9e84a85d37d5a663e5d2555b2bd6af1b0b56601697215185d84529a118289f"} Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.972759 4845 scope.go:117] "RemoveContainer" containerID="2d3403ecc8aab1e7f1cb36779d4b45fbc1dacabb752fd06184d50a7990b505be" Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.977480 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle\") pod \"06e2175d-c446-4586-a3cb-e5819314abfe\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.977649 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data\") pod \"06e2175d-c446-4586-a3cb-e5819314abfe\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.977687 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzds\" (UniqueName: \"kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds\") pod \"06e2175d-c446-4586-a3cb-e5819314abfe\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " Feb 02 10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.977813 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom\") pod \"06e2175d-c446-4586-a3cb-e5819314abfe\" (UID: \"06e2175d-c446-4586-a3cb-e5819314abfe\") " Feb 02 
10:54:15 crc kubenswrapper[4845]: I0202 10:54:15.998356 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "06e2175d-c446-4586-a3cb-e5819314abfe" (UID: "06e2175d-c446-4586-a3cb-e5819314abfe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.000547 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds" (OuterVolumeSpecName: "kube-api-access-xvzds") pod "06e2175d-c446-4586-a3cb-e5819314abfe" (UID: "06e2175d-c446-4586-a3cb-e5819314abfe"). InnerVolumeSpecName "kube-api-access-xvzds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.048248 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06e2175d-c446-4586-a3cb-e5819314abfe" (UID: "06e2175d-c446-4586-a3cb-e5819314abfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.075347 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data" (OuterVolumeSpecName: "config-data") pod "06e2175d-c446-4586-a3cb-e5819314abfe" (UID: "06e2175d-c446-4586-a3cb-e5819314abfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.084115 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.084161 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvzds\" (UniqueName: \"kubernetes.io/projected/06e2175d-c446-4586-a3cb-e5819314abfe-kube-api-access-xvzds\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.084172 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.084183 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06e2175d-c446-4586-a3cb-e5819314abfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.238418 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.238491 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.238543 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.239742 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.239796 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c" gracePeriod=600 Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.289896 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-655bdf96f4-zpj7r" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.354980 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"] Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.377333 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d94ddcf58-g79zr"] Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.389709 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom\") pod \"1b30c63b-6407-4f1d-a393-c4f33a758db8\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.390382 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle\") pod \"1b30c63b-6407-4f1d-a393-c4f33a758db8\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.390504 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data\") pod \"1b30c63b-6407-4f1d-a393-c4f33a758db8\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.390651 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8jn2\" (UniqueName: \"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2\") pod \"1b30c63b-6407-4f1d-a393-c4f33a758db8\" (UID: \"1b30c63b-6407-4f1d-a393-c4f33a758db8\") " Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.398269 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2" (OuterVolumeSpecName: "kube-api-access-j8jn2") pod "1b30c63b-6407-4f1d-a393-c4f33a758db8" (UID: "1b30c63b-6407-4f1d-a393-c4f33a758db8"). InnerVolumeSpecName "kube-api-access-j8jn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.402184 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b30c63b-6407-4f1d-a393-c4f33a758db8" (UID: "1b30c63b-6407-4f1d-a393-c4f33a758db8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.440772 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b30c63b-6407-4f1d-a393-c4f33a758db8" (UID: "1b30c63b-6407-4f1d-a393-c4f33a758db8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.486949 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data" (OuterVolumeSpecName: "config-data") pod "1b30c63b-6407-4f1d-a393-c4f33a758db8" (UID: "1b30c63b-6407-4f1d-a393-c4f33a758db8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.493696 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.493738 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8jn2\" (UniqueName: \"kubernetes.io/projected/1b30c63b-6407-4f1d-a393-c4f33a758db8-kube-api-access-j8jn2\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.493752 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.493764 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30c63b-6407-4f1d-a393-c4f33a758db8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.984409 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-655bdf96f4-zpj7r" event={"ID":"1b30c63b-6407-4f1d-a393-c4f33a758db8","Type":"ContainerDied","Data":"1958eba93e5442b126732bbc2709ccb6a5ad1ffec89971c483a8a27cda81d546"} Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.984660 4845 scope.go:117] "RemoveContainer" containerID="515ee1e3ff8694e116cc5e516b9da90abbcfc1e6bc587e3f4003c38c236374a8" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.984448 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-655bdf96f4-zpj7r" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.990609 4845 generic.go:334] "Generic (PLEG): container finished" podID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerID="65f19a5e53861afad5be0dc22811e405a7089640d2466a967d0205688cbae9c3" exitCode=0 Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.990664 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerDied","Data":"65f19a5e53861afad5be0dc22811e405a7089640d2466a967d0205688cbae9c3"} Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.990688 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53bef472-98d8-47d6-9601-fa9bc8438a5d","Type":"ContainerDied","Data":"3ed9e63dbb2ac681fa2a644a55a08157b6bb091be1c082086d50bfa19e192457"} Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.990699 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed9e63dbb2ac681fa2a644a55a08157b6bb091be1c082086d50bfa19e192457" Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.996168 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c" exitCode=0 Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.996250 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c"} Feb 02 10:54:16 crc kubenswrapper[4845]: I0202 10:54:16.996318 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020"} Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.050843 4845 scope.go:117] "RemoveContainer" containerID="b265cb3810e3935261baf8bbd2287ce4faf34ceae4eb09c4d8144e547b3debd5" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.098728 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.109720 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.109828 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110008 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110063 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bjdd\" (UniqueName: \"kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110131 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110234 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110289 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle\") pod \"53bef472-98d8-47d6-9601-fa9bc8438a5d\" (UID: \"53bef472-98d8-47d6-9601-fa9bc8438a5d\") " Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110277 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.110558 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.111255 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.111281 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53bef472-98d8-47d6-9601-fa9bc8438a5d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.115089 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd" (OuterVolumeSpecName: "kube-api-access-6bjdd") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "kube-api-access-6bjdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.118155 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts" (OuterVolumeSpecName: "scripts") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.119717 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"] Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.135013 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-655bdf96f4-zpj7r"] Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.154201 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.215251 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.215276 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bjdd\" (UniqueName: \"kubernetes.io/projected/53bef472-98d8-47d6-9601-fa9bc8438a5d-kube-api-access-6bjdd\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.215286 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.253024 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.257801 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data" (OuterVolumeSpecName: "config-data") pod "53bef472-98d8-47d6-9601-fa9bc8438a5d" (UID: "53bef472-98d8-47d6-9601-fa9bc8438a5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.316834 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.316869 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bef472-98d8-47d6-9601-fa9bc8438a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.758654 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" path="/var/lib/kubelet/pods/06e2175d-c446-4586-a3cb-e5819314abfe/volumes" Feb 02 10:54:17 crc kubenswrapper[4845]: I0202 10:54:17.759435 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" path="/var/lib/kubelet/pods/1b30c63b-6407-4f1d-a393-c4f33a758db8/volumes" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.021003 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.050151 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.069121 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.088695 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089250 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="proxy-httpd" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089267 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="proxy-httpd" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089289 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="sg-core" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089296 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="sg-core" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089312 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-notification-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089319 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-notification-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089328 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-central-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089334 4845 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-central-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089351 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089357 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089370 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089377 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089388 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089393 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: E0202 10:54:18.089419 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089425 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089626 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089639 4845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="sg-core" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089652 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b30c63b-6407-4f1d-a393-c4f33a758db8" containerName="heat-api" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089665 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-notification-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089677 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089688 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="proxy-httpd" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089703 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e2175d-c446-4586-a3cb-e5819314abfe" containerName="heat-cfnapi" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.089713 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" containerName="ceilometer-central-agent" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.091737 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.093655 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.095230 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.102254 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.255690 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v49d\" (UniqueName: \"kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.255806 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.255919 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.255989 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data\") pod \"ceilometer-0\" (UID: 
\"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.256115 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.256145 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.256163 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.360868 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365320 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365385 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365414 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365551 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v49d\" (UniqueName: \"kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365598 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.365704 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.366334 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 
02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.366921 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.368462 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.369567 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.369774 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.374856 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.389412 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v49d\" (UniqueName: \"kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d\") pod \"ceilometer-0\" 
(UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.425191 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:18 crc kubenswrapper[4845]: I0202 10:54:18.938597 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:19 crc kubenswrapper[4845]: I0202 10:54:19.036708 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerStarted","Data":"2e307c045dfca6f5ca82d2578694038311524ca7900bb0337d06405f23ff2a24"} Feb 02 10:54:19 crc kubenswrapper[4845]: E0202 10:54:19.590299 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:19 crc kubenswrapper[4845]: I0202 10:54:19.733115 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53bef472-98d8-47d6-9601-fa9bc8438a5d" path="/var/lib/kubelet/pods/53bef472-98d8-47d6-9601-fa9bc8438a5d/volumes" Feb 02 10:54:19 crc kubenswrapper[4845]: I0202 10:54:19.968737 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.053245 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerStarted","Data":"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448"} Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.220098 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.220421 4845 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.249347 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-658dbb4bcd-qn5fs" Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.301544 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.303183 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.319553 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"] Feb 02 10:54:20 crc kubenswrapper[4845]: I0202 10:54:20.320005 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-567746f76f-zjfmt" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" containerName="heat-engine" containerID="cri-o://307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" gracePeriod=60 Feb 02 10:54:21 crc kubenswrapper[4845]: I0202 10:54:21.066783 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerStarted","Data":"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb"} Feb 02 10:54:21 crc kubenswrapper[4845]: I0202 10:54:21.068079 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:21 crc kubenswrapper[4845]: I0202 10:54:21.068204 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:22 crc kubenswrapper[4845]: I0202 10:54:22.084040 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerStarted","Data":"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef"} Feb 02 10:54:23 crc kubenswrapper[4845]: I0202 10:54:23.114351 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:54:23 crc kubenswrapper[4845]: I0202 10:54:23.115453 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:54:23 crc kubenswrapper[4845]: I0202 10:54:23.116717 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 10:54:23 crc kubenswrapper[4845]: E0202 10:54:23.118740 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 10:54:23 crc kubenswrapper[4845]: E0202 10:54:23.123271 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 10:54:23 crc kubenswrapper[4845]: E0202 10:54:23.125115 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 10:54:23 crc kubenswrapper[4845]: E0202 10:54:23.125169 4845 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" probeType="Readiness" pod="openstack/heat-engine-567746f76f-zjfmt" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" containerName="heat-engine" Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.130805 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerStarted","Data":"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c"} Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.132341 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.131707 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="proxy-httpd" containerID="cri-o://9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c" gracePeriod=30 Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.131033 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-central-agent" containerID="cri-o://41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448" gracePeriod=30 Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.131740 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-notification-agent" containerID="cri-o://135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb" gracePeriod=30 Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.131726 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="sg-core" containerID="cri-o://dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef" 
gracePeriod=30 Feb 02 10:54:24 crc kubenswrapper[4845]: I0202 10:54:24.175713 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.601790529 podStartE2EDuration="6.175687912s" podCreationTimestamp="2026-02-02 10:54:18 +0000 UTC" firstStartedPulling="2026-02-02 10:54:18.961453041 +0000 UTC m=+1340.052854491" lastFinishedPulling="2026-02-02 10:54:23.535350424 +0000 UTC m=+1344.626751874" observedRunningTime="2026-02-02 10:54:24.166557959 +0000 UTC m=+1345.257959409" watchObservedRunningTime="2026-02-02 10:54:24.175687912 +0000 UTC m=+1345.267089372" Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.146774 4845 generic.go:334] "Generic (PLEG): container finished" podID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerID="dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef" exitCode=2 Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.147147 4845 generic.go:334] "Generic (PLEG): container finished" podID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerID="135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb" exitCode=0 Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.146842 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerDied","Data":"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef"} Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.147201 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerDied","Data":"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb"} Feb 02 10:54:25 crc kubenswrapper[4845]: E0202 10:54:25.432416 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.563619 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.564047 4845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:54:25 crc kubenswrapper[4845]: I0202 10:54:25.566445 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.100971 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p6lbd"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.103368 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.116872 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p6lbd"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.197860 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.197988 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kw8\" (UniqueName: \"kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 
10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.213411 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8fed-account-create-update-p76r4"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.214913 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.216944 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.224266 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8fed-account-create-update-p76r4"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.301051 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.301138 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.301255 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kw8\" (UniqueName: \"kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.301335 4845 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s4c\" (UniqueName: \"kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.301987 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.308784 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-69t2n"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.310727 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.324452 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-69t2n"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.333692 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kw8\" (UniqueName: \"kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8\") pod \"nova-api-db-create-p6lbd\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.409441 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.409533 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.409634 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rsvt\" (UniqueName: \"kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.409688 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-58s4c\" (UniqueName: \"kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.410955 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.431938 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vkhkq"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.433946 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.434586 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s4c\" (UniqueName: \"kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c\") pod \"nova-api-8fed-account-create-update-p76r4\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.436223 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.449925 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-dfd5-account-create-update-9mdwn"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.451451 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.456645 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.471944 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vkhkq"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.503938 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dfd5-account-create-update-9mdwn"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.513094 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rsvt\" (UniqueName: \"kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.513145 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6m2\" (UniqueName: \"kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.513317 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.513439 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.514356 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.530679 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.543689 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rsvt\" (UniqueName: \"kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt\") pod \"nova-cell0-db-create-69t2n\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.615196 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6m2\" (UniqueName: \"kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.615324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc 
kubenswrapper[4845]: I0202 10:54:27.615393 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxqp\" (UniqueName: \"kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.615420 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.616596 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.629337 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4ab2-account-create-update-l992n"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.633453 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.639309 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.642267 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6m2\" (UniqueName: \"kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2\") pod \"nova-cell1-db-create-vkhkq\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.644217 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.645040 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ab2-account-create-update-l992n"] Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.699057 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.723859 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.726122 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.726289 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8nxqp\" (UniqueName: \"kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.726354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.729977 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.769040 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxqp\" (UniqueName: \"kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp\") pod \"nova-cell0-dfd5-account-create-update-9mdwn\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.839649 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " 
pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.839832 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.847621 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:27 crc kubenswrapper[4845]: I0202 10:54:27.874007 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl\") pod \"nova-cell1-4ab2-account-create-update-l992n\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.026938 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.056170 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.295789 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p6lbd"] Feb 02 10:54:28 crc kubenswrapper[4845]: W0202 10:54:28.298273 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b758e3_acc2_451a_b64d_9c53a7e5f98f.slice/crio-a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b WatchSource:0}: Error finding container a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b: Status 404 returned error can't find the container with id a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.381817 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8fed-account-create-update-p76r4"] Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.737460 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-69t2n"] Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.782791 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vkhkq"] Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.972632 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ab2-account-create-update-l992n"] Feb 02 10:54:28 crc kubenswrapper[4845]: I0202 10:54:28.998758 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dfd5-account-create-update-9mdwn"] Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.282722 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" event={"ID":"93e02369-64e6-46f8-a84d-f50396230784","Type":"ContainerStarted","Data":"def3159dca0d08156117550efff6e96fe43dd7fc3991f81ebd27a9fa9cd35f91"} Feb 02 10:54:29 crc 
kubenswrapper[4845]: I0202 10:54:29.286637 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" event={"ID":"08a7f3c4-2a4a-4d07-91ee-27a63961c272","Type":"ContainerStarted","Data":"1ac2b596634ce8baec011403f3175913f50d5fac461558bf9df2981fd732ceaf"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.309306 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vkhkq" event={"ID":"b9ca8c7e-f45d-4014-9599-2ba08495811f","Type":"ContainerStarted","Data":"c959dbd0617786458da203ebfec1f5f7d7cbfed06b8a763a051d64aecc2aaf06"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.316642 4845 generic.go:334] "Generic (PLEG): container finished" podID="621aa5b7-f496-48f4-a72d-74e8886f813e" containerID="8360a8bbf698c5c922fa9756941c2a4df903063c7069ac543b4e17f4e1f546d5" exitCode=0 Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.316774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8fed-account-create-update-p76r4" event={"ID":"621aa5b7-f496-48f4-a72d-74e8886f813e","Type":"ContainerDied","Data":"8360a8bbf698c5c922fa9756941c2a4df903063c7069ac543b4e17f4e1f546d5"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.316812 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8fed-account-create-update-p76r4" event={"ID":"621aa5b7-f496-48f4-a72d-74e8886f813e","Type":"ContainerStarted","Data":"f76b13f78cfbc517da005a059719e5fa9f1bfa3650b860b4c36933580a57ddff"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.322443 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69t2n" event={"ID":"65acb40f-b003-4d37-93c0-4198beba28ed","Type":"ContainerStarted","Data":"5f33208b080e0c59e43af57ad0c35b7e52589ea9537af21b95c6b74465223db0"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.329659 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="36b758e3-acc2-451a-b64d-9c53a7e5f98f" containerID="d9f965fd9f9dbd88dc973b197a2f3c57c9e0f963db241ff90038d5e96106751f" exitCode=0 Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.329721 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p6lbd" event={"ID":"36b758e3-acc2-451a-b64d-9c53a7e5f98f","Type":"ContainerDied","Data":"d9f965fd9f9dbd88dc973b197a2f3c57c9e0f963db241ff90038d5e96106751f"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.329747 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p6lbd" event={"ID":"36b758e3-acc2-451a-b64d-9c53a7e5f98f","Type":"ContainerStarted","Data":"a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b"} Feb 02 10:54:29 crc kubenswrapper[4845]: I0202 10:54:29.400040 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-69t2n" podStartSLOduration=2.400016094 podStartE2EDuration="2.400016094s" podCreationTimestamp="2026-02-02 10:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:54:29.383698914 +0000 UTC m=+1350.475100364" watchObservedRunningTime="2026-02-02 10:54:29.400016094 +0000 UTC m=+1350.491417544" Feb 02 10:54:29 crc kubenswrapper[4845]: E0202 10:54:29.676319 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.373394 4845 generic.go:334] "Generic (PLEG): container finished" podID="93e02369-64e6-46f8-a84d-f50396230784" containerID="74d2907667c2c914fc57daa5fed9146112b93f8bf9f401e92958c5811e8dc6f3" exitCode=0 Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.373721 4845 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" event={"ID":"93e02369-64e6-46f8-a84d-f50396230784","Type":"ContainerDied","Data":"74d2907667c2c914fc57daa5fed9146112b93f8bf9f401e92958c5811e8dc6f3"} Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.378422 4845 generic.go:334] "Generic (PLEG): container finished" podID="08a7f3c4-2a4a-4d07-91ee-27a63961c272" containerID="6019d47869688d15147a8932c0b84dab21fa14445dd8f012a8b145c6bed8a74d" exitCode=0 Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.378490 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" event={"ID":"08a7f3c4-2a4a-4d07-91ee-27a63961c272","Type":"ContainerDied","Data":"6019d47869688d15147a8932c0b84dab21fa14445dd8f012a8b145c6bed8a74d"} Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.384491 4845 generic.go:334] "Generic (PLEG): container finished" podID="30fbb4bb-1391-411d-adda-a41d223aed00" containerID="307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" exitCode=0 Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.384572 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-567746f76f-zjfmt" event={"ID":"30fbb4bb-1391-411d-adda-a41d223aed00","Type":"ContainerDied","Data":"307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93"} Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.394756 4845 generic.go:334] "Generic (PLEG): container finished" podID="b9ca8c7e-f45d-4014-9599-2ba08495811f" containerID="b7177510879626b8e93e3bec07d3378fb19c2869cf116cbd055fdaad370ef7da" exitCode=0 Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.395198 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vkhkq" event={"ID":"b9ca8c7e-f45d-4014-9599-2ba08495811f","Type":"ContainerDied","Data":"b7177510879626b8e93e3bec07d3378fb19c2869cf116cbd055fdaad370ef7da"} Feb 02 10:54:30 crc 
kubenswrapper[4845]: I0202 10:54:30.399132 4845 generic.go:334] "Generic (PLEG): container finished" podID="65acb40f-b003-4d37-93c0-4198beba28ed" containerID="8ac663309bf94373ab6aee2bd864d56af41199dceccdc883fa0aff4b2aa502f8" exitCode=0 Feb 02 10:54:30 crc kubenswrapper[4845]: I0202 10:54:30.399591 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69t2n" event={"ID":"65acb40f-b003-4d37-93c0-4198beba28ed","Type":"ContainerDied","Data":"8ac663309bf94373ab6aee2bd864d56af41199dceccdc883fa0aff4b2aa502f8"} Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.137791 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.147270 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.160767 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.267668 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data\") pod \"30fbb4bb-1391-411d-adda-a41d223aed00\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.267714 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle\") pod \"30fbb4bb-1391-411d-adda-a41d223aed00\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.267745 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf6hs\" (UniqueName: \"kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs\") pod \"30fbb4bb-1391-411d-adda-a41d223aed00\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.268942 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts\") pod \"621aa5b7-f496-48f4-a72d-74e8886f813e\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.269100 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58s4c\" (UniqueName: \"kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c\") pod \"621aa5b7-f496-48f4-a72d-74e8886f813e\" (UID: \"621aa5b7-f496-48f4-a72d-74e8886f813e\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.269159 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts\") pod \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.269247 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom\") pod \"30fbb4bb-1391-411d-adda-a41d223aed00\" (UID: \"30fbb4bb-1391-411d-adda-a41d223aed00\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.269339 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2kw8\" (UniqueName: \"kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8\") pod \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\" (UID: \"36b758e3-acc2-451a-b64d-9c53a7e5f98f\") " Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.269697 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "621aa5b7-f496-48f4-a72d-74e8886f813e" (UID: "621aa5b7-f496-48f4-a72d-74e8886f813e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.270172 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621aa5b7-f496-48f4-a72d-74e8886f813e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.270318 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36b758e3-acc2-451a-b64d-9c53a7e5f98f" (UID: "36b758e3-acc2-451a-b64d-9c53a7e5f98f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.279824 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c" (OuterVolumeSpecName: "kube-api-access-58s4c") pod "621aa5b7-f496-48f4-a72d-74e8886f813e" (UID: "621aa5b7-f496-48f4-a72d-74e8886f813e"). InnerVolumeSpecName "kube-api-access-58s4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.280334 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs" (OuterVolumeSpecName: "kube-api-access-cf6hs") pod "30fbb4bb-1391-411d-adda-a41d223aed00" (UID: "30fbb4bb-1391-411d-adda-a41d223aed00"). InnerVolumeSpecName "kube-api-access-cf6hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.282229 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30fbb4bb-1391-411d-adda-a41d223aed00" (UID: "30fbb4bb-1391-411d-adda-a41d223aed00"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.300860 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8" (OuterVolumeSpecName: "kube-api-access-q2kw8") pod "36b758e3-acc2-451a-b64d-9c53a7e5f98f" (UID: "36b758e3-acc2-451a-b64d-9c53a7e5f98f"). InnerVolumeSpecName "kube-api-access-q2kw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.313343 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30fbb4bb-1391-411d-adda-a41d223aed00" (UID: "30fbb4bb-1391-411d-adda-a41d223aed00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.361352 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data" (OuterVolumeSpecName: "config-data") pod "30fbb4bb-1391-411d-adda-a41d223aed00" (UID: "30fbb4bb-1391-411d-adda-a41d223aed00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375730 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375777 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2kw8\" (UniqueName: \"kubernetes.io/projected/36b758e3-acc2-451a-b64d-9c53a7e5f98f-kube-api-access-q2kw8\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375795 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375808 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fbb4bb-1391-411d-adda-a41d223aed00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375821 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf6hs\" (UniqueName: \"kubernetes.io/projected/30fbb4bb-1391-411d-adda-a41d223aed00-kube-api-access-cf6hs\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375833 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58s4c\" (UniqueName: \"kubernetes.io/projected/621aa5b7-f496-48f4-a72d-74e8886f813e-kube-api-access-58s4c\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.375846 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36b758e3-acc2-451a-b64d-9c53a7e5f98f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:31 crc 
kubenswrapper[4845]: I0202 10:54:31.451699 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p6lbd" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.451740 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p6lbd" event={"ID":"36b758e3-acc2-451a-b64d-9c53a7e5f98f","Type":"ContainerDied","Data":"a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b"} Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.451789 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2dc59b71d4e75e917f735576836e82d191a7123b82395f183042e4e8602897b" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.459614 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-567746f76f-zjfmt" event={"ID":"30fbb4bb-1391-411d-adda-a41d223aed00","Type":"ContainerDied","Data":"2ed051f1edd72eac12419e5ea83d9cdd76f867860395bc32f8a07b0d289f477d"} Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.459689 4845 scope.go:117] "RemoveContainer" containerID="307d8b0b23eff351361b7a613060b4a588e27fc82cd046b95b3491796e7afc93" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.459728 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-567746f76f-zjfmt" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.465878 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8fed-account-create-update-p76r4" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.467478 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8fed-account-create-update-p76r4" event={"ID":"621aa5b7-f496-48f4-a72d-74e8886f813e","Type":"ContainerDied","Data":"f76b13f78cfbc517da005a059719e5fa9f1bfa3650b860b4c36933580a57ddff"} Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.467544 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76b13f78cfbc517da005a059719e5fa9f1bfa3650b860b4c36933580a57ddff" Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.578935 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"] Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.591044 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-567746f76f-zjfmt"] Feb 02 10:54:31 crc kubenswrapper[4845]: I0202 10:54:31.755561 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" path="/var/lib/kubelet/pods/30fbb4bb-1391-411d-adda-a41d223aed00/volumes" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.005197 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.096327 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts\") pod \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.096434 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl\") pod \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\" (UID: \"08a7f3c4-2a4a-4d07-91ee-27a63961c272\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.099558 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08a7f3c4-2a4a-4d07-91ee-27a63961c272" (UID: "08a7f3c4-2a4a-4d07-91ee-27a63961c272"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.131457 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl" (OuterVolumeSpecName: "kube-api-access-cgmvl") pod "08a7f3c4-2a4a-4d07-91ee-27a63961c272" (UID: "08a7f3c4-2a4a-4d07-91ee-27a63961c272"). InnerVolumeSpecName "kube-api-access-cgmvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.188220 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.200696 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a7f3c4-2a4a-4d07-91ee-27a63961c272-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.200735 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/08a7f3c4-2a4a-4d07-91ee-27a63961c272-kube-api-access-cgmvl\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.201954 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.210781 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.301728 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nxqp\" (UniqueName: \"kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp\") pod \"93e02369-64e6-46f8-a84d-f50396230784\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.301823 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts\") pod \"93e02369-64e6-46f8-a84d-f50396230784\" (UID: \"93e02369-64e6-46f8-a84d-f50396230784\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.301946 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts\") pod \"65acb40f-b003-4d37-93c0-4198beba28ed\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.302041 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6m2\" (UniqueName: \"kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2\") pod \"b9ca8c7e-f45d-4014-9599-2ba08495811f\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.302133 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts\") pod \"b9ca8c7e-f45d-4014-9599-2ba08495811f\" (UID: \"b9ca8c7e-f45d-4014-9599-2ba08495811f\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.302247 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rsvt\" (UniqueName: \"kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt\") pod \"65acb40f-b003-4d37-93c0-4198beba28ed\" (UID: \"65acb40f-b003-4d37-93c0-4198beba28ed\") " Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.303638 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65acb40f-b003-4d37-93c0-4198beba28ed" (UID: "65acb40f-b003-4d37-93c0-4198beba28ed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.306814 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp" (OuterVolumeSpecName: "kube-api-access-8nxqp") pod "93e02369-64e6-46f8-a84d-f50396230784" (UID: "93e02369-64e6-46f8-a84d-f50396230784"). InnerVolumeSpecName "kube-api-access-8nxqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.307231 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93e02369-64e6-46f8-a84d-f50396230784" (UID: "93e02369-64e6-46f8-a84d-f50396230784"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.309569 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9ca8c7e-f45d-4014-9599-2ba08495811f" (UID: "b9ca8c7e-f45d-4014-9599-2ba08495811f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.311468 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2" (OuterVolumeSpecName: "kube-api-access-wn6m2") pod "b9ca8c7e-f45d-4014-9599-2ba08495811f" (UID: "b9ca8c7e-f45d-4014-9599-2ba08495811f"). InnerVolumeSpecName "kube-api-access-wn6m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.331118 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt" (OuterVolumeSpecName: "kube-api-access-5rsvt") pod "65acb40f-b003-4d37-93c0-4198beba28ed" (UID: "65acb40f-b003-4d37-93c0-4198beba28ed"). InnerVolumeSpecName "kube-api-access-5rsvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406752 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ca8c7e-f45d-4014-9599-2ba08495811f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406787 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rsvt\" (UniqueName: \"kubernetes.io/projected/65acb40f-b003-4d37-93c0-4198beba28ed-kube-api-access-5rsvt\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406798 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nxqp\" (UniqueName: \"kubernetes.io/projected/93e02369-64e6-46f8-a84d-f50396230784-kube-api-access-8nxqp\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406807 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93e02369-64e6-46f8-a84d-f50396230784-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406816 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65acb40f-b003-4d37-93c0-4198beba28ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.406824 4845 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-wn6m2\" (UniqueName: \"kubernetes.io/projected/b9ca8c7e-f45d-4014-9599-2ba08495811f-kube-api-access-wn6m2\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.478812 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" event={"ID":"93e02369-64e6-46f8-a84d-f50396230784","Type":"ContainerDied","Data":"def3159dca0d08156117550efff6e96fe43dd7fc3991f81ebd27a9fa9cd35f91"} Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.478901 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def3159dca0d08156117550efff6e96fe43dd7fc3991f81ebd27a9fa9cd35f91" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.478837 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dfd5-account-create-update-9mdwn" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.480359 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" event={"ID":"08a7f3c4-2a4a-4d07-91ee-27a63961c272","Type":"ContainerDied","Data":"1ac2b596634ce8baec011403f3175913f50d5fac461558bf9df2981fd732ceaf"} Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.480401 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac2b596634ce8baec011403f3175913f50d5fac461558bf9df2981fd732ceaf" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.480451 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab2-account-create-update-l992n" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.495679 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vkhkq" event={"ID":"b9ca8c7e-f45d-4014-9599-2ba08495811f","Type":"ContainerDied","Data":"c959dbd0617786458da203ebfec1f5f7d7cbfed06b8a763a051d64aecc2aaf06"} Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.495723 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c959dbd0617786458da203ebfec1f5f7d7cbfed06b8a763a051d64aecc2aaf06" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.495792 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vkhkq" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.498875 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-69t2n" event={"ID":"65acb40f-b003-4d37-93c0-4198beba28ed","Type":"ContainerDied","Data":"5f33208b080e0c59e43af57ad0c35b7e52589ea9537af21b95c6b74465223db0"} Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.498953 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f33208b080e0c59e43af57ad0c35b7e52589ea9537af21b95c6b74465223db0" Feb 02 10:54:32 crc kubenswrapper[4845]: I0202 10:54:32.499018 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-69t2n" Feb 02 10:54:34 crc kubenswrapper[4845]: I0202 10:54:34.528429 4845 generic.go:334] "Generic (PLEG): container finished" podID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerID="41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448" exitCode=0 Feb 02 10:54:34 crc kubenswrapper[4845]: I0202 10:54:34.528540 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerDied","Data":"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448"} Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.752172 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qwpzq"] Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753095 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ca8c7e-f45d-4014-9599-2ba08495811f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753115 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ca8c7e-f45d-4014-9599-2ba08495811f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753141 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b758e3-acc2-451a-b64d-9c53a7e5f98f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753149 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b758e3-acc2-451a-b64d-9c53a7e5f98f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753163 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621aa5b7-f496-48f4-a72d-74e8886f813e" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753170 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="621aa5b7-f496-48f4-a72d-74e8886f813e" 
containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753184 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e02369-64e6-46f8-a84d-f50396230784" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753192 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e02369-64e6-46f8-a84d-f50396230784" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753204 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65acb40f-b003-4d37-93c0-4198beba28ed" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753211 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="65acb40f-b003-4d37-93c0-4198beba28ed" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753221 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a7f3c4-2a4a-4d07-91ee-27a63961c272" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753228 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a7f3c4-2a4a-4d07-91ee-27a63961c272" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: E0202 10:54:37.753253 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" containerName="heat-engine" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753260 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" containerName="heat-engine" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753537 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b758e3-acc2-451a-b64d-9c53a7e5f98f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753557 4845 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="08a7f3c4-2a4a-4d07-91ee-27a63961c272" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753574 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ca8c7e-f45d-4014-9599-2ba08495811f" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753594 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e02369-64e6-46f8-a84d-f50396230784" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753603 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="621aa5b7-f496-48f4-a72d-74e8886f813e" containerName="mariadb-account-create-update" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753619 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="65acb40f-b003-4d37-93c0-4198beba28ed" containerName="mariadb-database-create" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.753634 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fbb4bb-1391-411d-adda-a41d223aed00" containerName="heat-engine" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.754597 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.760621 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.761074 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8lh6g" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.761370 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.795425 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qwpzq"] Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.842377 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.842523 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkklk\" (UniqueName: \"kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.842601 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " 
pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.842630 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.945236 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.945360 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkklk\" (UniqueName: \"kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.945427 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.945460 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " 
pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.957776 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.957934 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.957960 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:37 crc kubenswrapper[4845]: I0202 10:54:37.966896 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkklk\" (UniqueName: \"kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk\") pod \"nova-cell0-conductor-db-sync-qwpzq\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:38 crc kubenswrapper[4845]: I0202 10:54:38.078553 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:54:38 crc kubenswrapper[4845]: I0202 10:54:38.564805 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qwpzq"] Feb 02 10:54:38 crc kubenswrapper[4845]: I0202 10:54:38.599903 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" event={"ID":"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff","Type":"ContainerStarted","Data":"7649afe14ec5ab0dbef279c4cc985147cdb3a346b125d477468bb7cabfb65001"} Feb 02 10:54:40 crc kubenswrapper[4845]: E0202 10:54:40.071534 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:40 crc kubenswrapper[4845]: E0202 10:54:40.183617 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:48 crc kubenswrapper[4845]: E0202 10:54:48.242369 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:48 crc kubenswrapper[4845]: E0202 10:54:48.244742 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:48 crc kubenswrapper[4845]: I0202 10:54:48.445947 4845 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.244254 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-mjh66"] Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.251299 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.262828 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mjh66"] Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.285763 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0d03-account-create-update-79bpm"] Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.292455 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.294758 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.308313 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0d03-account-create-update-79bpm"] Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.463746 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.463922 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgpr\" 
(UniqueName: \"kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.464108 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cdfk\" (UniqueName: \"kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.464346 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.566185 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgpr\" (UniqueName: \"kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.566324 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cdfk\" (UniqueName: \"kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.566387 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.566489 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.567296 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.568581 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.590503 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgpr\" (UniqueName: \"kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr\") pod \"aodh-0d03-account-create-update-79bpm\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.592014 4845 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2cdfk\" (UniqueName: \"kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk\") pod \"aodh-db-create-mjh66\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.638155 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.772564 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" event={"ID":"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff","Type":"ContainerStarted","Data":"4bd5dbe3f7a8b7903c6ee652f72c588239fb784e24ebb2f47f5ca3b9452668fa"} Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.796615 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" podStartSLOduration=2.527051698 podStartE2EDuration="12.796592214s" podCreationTimestamp="2026-02-02 10:54:37 +0000 UTC" firstStartedPulling="2026-02-02 10:54:38.568587739 +0000 UTC m=+1359.659989199" lastFinishedPulling="2026-02-02 10:54:48.838128265 +0000 UTC m=+1369.929529715" observedRunningTime="2026-02-02 10:54:49.787023228 +0000 UTC m=+1370.878424678" watchObservedRunningTime="2026-02-02 10:54:49.796592214 +0000 UTC m=+1370.887993664" Feb 02 10:54:49 crc kubenswrapper[4845]: I0202 10:54:49.873102 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:50 crc kubenswrapper[4845]: E0202 10:54:50.146932 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:50 crc kubenswrapper[4845]: I0202 10:54:50.423042 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0d03-account-create-update-79bpm"] Feb 02 10:54:50 crc kubenswrapper[4845]: I0202 10:54:50.548810 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mjh66"] Feb 02 10:54:50 crc kubenswrapper[4845]: W0202 10:54:50.551111 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45c3661_e66b_41a2_9a98_db215df0b2cf.slice/crio-e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216 WatchSource:0}: Error finding container e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216: Status 404 returned error can't find the container with id e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216 Feb 02 10:54:50 crc kubenswrapper[4845]: I0202 10:54:50.786777 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mjh66" event={"ID":"f45c3661-e66b-41a2-9a98-db215df0b2cf","Type":"ContainerStarted","Data":"e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216"} Feb 02 10:54:50 crc kubenswrapper[4845]: I0202 10:54:50.789123 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0d03-account-create-update-79bpm" event={"ID":"7c902530-dc88-4300-9356-1f3938cfef4a","Type":"ContainerStarted","Data":"71cf5e6659540df9435ad78762669f09fe675a7a8c5a0e409f37393a959ab2c1"} Feb 02 10:54:51 crc kubenswrapper[4845]: I0202 10:54:51.802273 4845 generic.go:334] "Generic 
(PLEG): container finished" podID="f45c3661-e66b-41a2-9a98-db215df0b2cf" containerID="45867aad16ddef2c9f5a1df92c6f8cab6d7f01caa36027be8a34aecad2ed798b" exitCode=0 Feb 02 10:54:51 crc kubenswrapper[4845]: I0202 10:54:51.802394 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mjh66" event={"ID":"f45c3661-e66b-41a2-9a98-db215df0b2cf","Type":"ContainerDied","Data":"45867aad16ddef2c9f5a1df92c6f8cab6d7f01caa36027be8a34aecad2ed798b"} Feb 02 10:54:51 crc kubenswrapper[4845]: I0202 10:54:51.805318 4845 generic.go:334] "Generic (PLEG): container finished" podID="7c902530-dc88-4300-9356-1f3938cfef4a" containerID="7bd7e7f5964f9f073fd42624c4ca749f932b5e04b1db26f0a1d5d0f87ecbdb1f" exitCode=0 Feb 02 10:54:51 crc kubenswrapper[4845]: I0202 10:54:51.805355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0d03-account-create-update-79bpm" event={"ID":"7c902530-dc88-4300-9356-1f3938cfef4a","Type":"ContainerDied","Data":"7bd7e7f5964f9f073fd42624c4ca749f932b5e04b1db26f0a1d5d0f87ecbdb1f"} Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.451076 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.458157 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.574002 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfgpr\" (UniqueName: \"kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr\") pod \"7c902530-dc88-4300-9356-1f3938cfef4a\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.574234 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cdfk\" (UniqueName: \"kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk\") pod \"f45c3661-e66b-41a2-9a98-db215df0b2cf\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.574396 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts\") pod \"f45c3661-e66b-41a2-9a98-db215df0b2cf\" (UID: \"f45c3661-e66b-41a2-9a98-db215df0b2cf\") " Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.574436 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts\") pod \"7c902530-dc88-4300-9356-1f3938cfef4a\" (UID: \"7c902530-dc88-4300-9356-1f3938cfef4a\") " Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.575470 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c902530-dc88-4300-9356-1f3938cfef4a" (UID: "7c902530-dc88-4300-9356-1f3938cfef4a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.575525 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f45c3661-e66b-41a2-9a98-db215df0b2cf" (UID: "f45c3661-e66b-41a2-9a98-db215df0b2cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.579714 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr" (OuterVolumeSpecName: "kube-api-access-wfgpr") pod "7c902530-dc88-4300-9356-1f3938cfef4a" (UID: "7c902530-dc88-4300-9356-1f3938cfef4a"). InnerVolumeSpecName "kube-api-access-wfgpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.586174 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk" (OuterVolumeSpecName: "kube-api-access-2cdfk") pod "f45c3661-e66b-41a2-9a98-db215df0b2cf" (UID: "f45c3661-e66b-41a2-9a98-db215df0b2cf"). InnerVolumeSpecName "kube-api-access-2cdfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.677111 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfgpr\" (UniqueName: \"kubernetes.io/projected/7c902530-dc88-4300-9356-1f3938cfef4a-kube-api-access-wfgpr\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.677152 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cdfk\" (UniqueName: \"kubernetes.io/projected/f45c3661-e66b-41a2-9a98-db215df0b2cf-kube-api-access-2cdfk\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.677165 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45c3661-e66b-41a2-9a98-db215df0b2cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.677177 4845 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c902530-dc88-4300-9356-1f3938cfef4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.835283 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mjh66" event={"ID":"f45c3661-e66b-41a2-9a98-db215df0b2cf","Type":"ContainerDied","Data":"e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216"} Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.835338 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6713628b33a9b6b74b19eb1bd6de884b5e4d7d9343c91923ecacc0283e77216" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.835402 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-mjh66" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.840384 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0d03-account-create-update-79bpm" event={"ID":"7c902530-dc88-4300-9356-1f3938cfef4a","Type":"ContainerDied","Data":"71cf5e6659540df9435ad78762669f09fe675a7a8c5a0e409f37393a959ab2c1"} Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.840511 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71cf5e6659540df9435ad78762669f09fe675a7a8c5a0e409f37393a959ab2c1" Feb 02 10:54:53 crc kubenswrapper[4845]: I0202 10:54:53.840482 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0d03-account-create-update-79bpm" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.695341 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710507 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710557 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710578 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: 
\"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710640 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v49d\" (UniqueName: \"kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710908 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.710989 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.711018 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data\") pod \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\" (UID: \"baf4fb99-ffd9-4c16-b115-bbcb46c01096\") " Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.711281 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.711382 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.712245 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.712266 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf4fb99-ffd9-4c16-b115-bbcb46c01096-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.718361 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d" (OuterVolumeSpecName: "kube-api-access-6v49d") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "kube-api-access-6v49d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.736105 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts" (OuterVolumeSpecName: "scripts") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.814227 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.814315 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v49d\" (UniqueName: \"kubernetes.io/projected/baf4fb99-ffd9-4c16-b115-bbcb46c01096-kube-api-access-6v49d\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.827007 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.832286 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.854052 4845 generic.go:334] "Generic (PLEG): container finished" podID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerID="9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c" exitCode=137 Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.854135 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.854153 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerDied","Data":"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c"} Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.854563 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf4fb99-ffd9-4c16-b115-bbcb46c01096","Type":"ContainerDied","Data":"2e307c045dfca6f5ca82d2578694038311524ca7900bb0337d06405f23ff2a24"} Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.854615 4845 scope.go:117] "RemoveContainer" containerID="9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.875782 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data" (OuterVolumeSpecName: "config-data") pod "baf4fb99-ffd9-4c16-b115-bbcb46c01096" (UID: "baf4fb99-ffd9-4c16-b115-bbcb46c01096"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.916737 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.917024 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.917100 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf4fb99-ffd9-4c16-b115-bbcb46c01096-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.939658 4845 scope.go:117] "RemoveContainer" containerID="dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.967946 4845 scope.go:117] "RemoveContainer" containerID="135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb" Feb 02 10:54:54 crc kubenswrapper[4845]: I0202 10:54:54.996693 4845 scope.go:117] "RemoveContainer" containerID="41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.026039 4845 scope.go:117] "RemoveContainer" containerID="9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.026525 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c\": container with ID starting with 9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c not found: ID does not exist" 
containerID="9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.026614 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c"} err="failed to get container status \"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c\": rpc error: code = NotFound desc = could not find container \"9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c\": container with ID starting with 9180d33c29a258d5e5f21165e0791183ddb999a02f1388b14f51db73e9a8e68c not found: ID does not exist" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.026639 4845 scope.go:117] "RemoveContainer" containerID="dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.026833 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef\": container with ID starting with dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef not found: ID does not exist" containerID="dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.026859 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef"} err="failed to get container status \"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef\": rpc error: code = NotFound desc = could not find container \"dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef\": container with ID starting with dd1e730df852af3cdad58ffb1d7d3871da56a6b229c97c97e197ccfa743aa0ef not found: ID does not exist" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.026872 4845 scope.go:117] 
"RemoveContainer" containerID="135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.031989 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb\": container with ID starting with 135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb not found: ID does not exist" containerID="135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.032035 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb"} err="failed to get container status \"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb\": rpc error: code = NotFound desc = could not find container \"135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb\": container with ID starting with 135668b391056cee1086648142ef74df147dac60de9fce729184fb01bbd638eb not found: ID does not exist" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.032056 4845 scope.go:117] "RemoveContainer" containerID="41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.032480 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448\": container with ID starting with 41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448 not found: ID does not exist" containerID="41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.032517 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448"} err="failed to get container status \"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448\": rpc error: code = NotFound desc = could not find container \"41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448\": container with ID starting with 41f2374677b587fdd24068f72bbd9d20bd2de324d32a42b0b98f76dd238c1448 not found: ID does not exist" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.207419 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.225418 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.242685 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243373 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c902530-dc88-4300-9356-1f3938cfef4a" containerName="mariadb-account-create-update" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243403 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c902530-dc88-4300-9356-1f3938cfef4a" containerName="mariadb-account-create-update" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243435 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="sg-core" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243442 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="sg-core" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243454 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-notification-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243460 4845 
state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-notification-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243469 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45c3661-e66b-41a2-9a98-db215df0b2cf" containerName="mariadb-database-create" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243475 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45c3661-e66b-41a2-9a98-db215df0b2cf" containerName="mariadb-database-create" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243489 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="proxy-httpd" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243495 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="proxy-httpd" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.243507 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-central-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243513 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-central-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243740 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="proxy-httpd" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243756 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-central-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243772 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c902530-dc88-4300-9356-1f3938cfef4a" containerName="mariadb-account-create-update" Feb 02 10:54:55 
crc kubenswrapper[4845]: I0202 10:54:55.243780 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="ceilometer-notification-agent" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243789 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" containerName="sg-core" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.243807 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45c3661-e66b-41a2-9a98-db215df0b2cf" containerName="mariadb-database-create" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.246361 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.249252 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.249396 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.266585 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326436 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326475 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" 
Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326494 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326516 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326544 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknng\" (UniqueName: \"kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326690 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.326974 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428479 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428658 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428690 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428714 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428745 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428784 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bknng\" (UniqueName: \"kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" 
Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.428837 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.430356 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.430425 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.435013 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.435202 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.435266 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.435631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.453784 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bknng\" (UniqueName: \"kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng\") pod \"ceilometer-0\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: E0202 10:54:55.456935 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4430b5f_6421_41e2_b338_3b215c57957a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.587560 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:54:55 crc kubenswrapper[4845]: I0202 10:54:55.736409 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf4fb99-ffd9-4c16-b115-bbcb46c01096" path="/var/lib/kubelet/pods/baf4fb99-ffd9-4c16-b115-bbcb46c01096/volumes" Feb 02 10:54:56 crc kubenswrapper[4845]: I0202 10:54:56.183705 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:54:56 crc kubenswrapper[4845]: I0202 10:54:56.900459 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerStarted","Data":"4a2f52662f64ac4adafea886384b8aff903812b748685ed30b174caef644d27d"} Feb 02 10:54:57 crc kubenswrapper[4845]: I0202 10:54:57.912825 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerStarted","Data":"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0"} Feb 02 10:54:57 crc kubenswrapper[4845]: I0202 10:54:57.913549 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerStarted","Data":"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017"} Feb 02 10:54:58 crc kubenswrapper[4845]: I0202 10:54:58.933714 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerStarted","Data":"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358"} Feb 02 10:55:01 crc kubenswrapper[4845]: I0202 10:55:01.973571 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerStarted","Data":"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699"} Feb 02 10:55:01 crc kubenswrapper[4845]: I0202 
10:55:01.974522 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:55:02 crc kubenswrapper[4845]: I0202 10:55:02.002067 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.372800197 podStartE2EDuration="7.002041463s" podCreationTimestamp="2026-02-02 10:54:55 +0000 UTC" firstStartedPulling="2026-02-02 10:54:56.153524148 +0000 UTC m=+1377.244925598" lastFinishedPulling="2026-02-02 10:55:00.782765414 +0000 UTC m=+1381.874166864" observedRunningTime="2026-02-02 10:55:01.999660744 +0000 UTC m=+1383.091062194" watchObservedRunningTime="2026-02-02 10:55:02.002041463 +0000 UTC m=+1383.093442913" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.028251 4845 generic.go:334] "Generic (PLEG): container finished" podID="08e1823d-46cd-40c5-bea1-162473f9a4ce" containerID="9eb0cc21db22e7b0a6e9194e496c67f804362485d557f665096269f5e604e637" exitCode=137 Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.028453 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b9b757468-zfd7s" event={"ID":"08e1823d-46cd-40c5-bea1-162473f9a4ce","Type":"ContainerDied","Data":"9eb0cc21db22e7b0a6e9194e496c67f804362485d557f665096269f5e604e637"} Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.028808 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b9b757468-zfd7s" event={"ID":"08e1823d-46cd-40c5-bea1-162473f9a4ce","Type":"ContainerDied","Data":"99d07802bcf1c0ba9e3fd5b262b073cf214a20ee31d4f7170adba1e4c7702081"} Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.028834 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d07802bcf1c0ba9e3fd5b262b073cf214a20ee31d4f7170adba1e4c7702081" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.030628 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.194444 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data\") pod \"08e1823d-46cd-40c5-bea1-162473f9a4ce\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.194854 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjss5\" (UniqueName: \"kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5\") pod \"08e1823d-46cd-40c5-bea1-162473f9a4ce\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.195002 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle\") pod \"08e1823d-46cd-40c5-bea1-162473f9a4ce\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.195087 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom\") pod \"08e1823d-46cd-40c5-bea1-162473f9a4ce\" (UID: \"08e1823d-46cd-40c5-bea1-162473f9a4ce\") " Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.200110 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08e1823d-46cd-40c5-bea1-162473f9a4ce" (UID: "08e1823d-46cd-40c5-bea1-162473f9a4ce"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.200962 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5" (OuterVolumeSpecName: "kube-api-access-vjss5") pod "08e1823d-46cd-40c5-bea1-162473f9a4ce" (UID: "08e1823d-46cd-40c5-bea1-162473f9a4ce"). InnerVolumeSpecName "kube-api-access-vjss5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.225715 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08e1823d-46cd-40c5-bea1-162473f9a4ce" (UID: "08e1823d-46cd-40c5-bea1-162473f9a4ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.255645 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data" (OuterVolumeSpecName: "config-data") pod "08e1823d-46cd-40c5-bea1-162473f9a4ce" (UID: "08e1823d-46cd-40c5-bea1-162473f9a4ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.298809 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjss5\" (UniqueName: \"kubernetes.io/projected/08e1823d-46cd-40c5-bea1-162473f9a4ce-kube-api-access-vjss5\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.298851 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.298864 4845 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:07 crc kubenswrapper[4845]: I0202 10:55:07.299162 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e1823d-46cd-40c5-bea1-162473f9a4ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:08 crc kubenswrapper[4845]: I0202 10:55:08.040483 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b9b757468-zfd7s" Feb 02 10:55:08 crc kubenswrapper[4845]: I0202 10:55:08.070110 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b9b757468-zfd7s"] Feb 02 10:55:08 crc kubenswrapper[4845]: I0202 10:55:08.088542 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-b9b757468-zfd7s"] Feb 02 10:55:09 crc kubenswrapper[4845]: I0202 10:55:09.726840 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e1823d-46cd-40c5-bea1-162473f9a4ce" path="/var/lib/kubelet/pods/08e1823d-46cd-40c5-bea1-162473f9a4ce/volumes" Feb 02 10:55:14 crc kubenswrapper[4845]: I0202 10:55:14.112117 4845 generic.go:334] "Generic (PLEG): container finished" podID="cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" containerID="4bd5dbe3f7a8b7903c6ee652f72c588239fb784e24ebb2f47f5ca3b9452668fa" exitCode=0 Feb 02 10:55:14 crc kubenswrapper[4845]: I0202 10:55:14.112211 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" event={"ID":"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff","Type":"ContainerDied","Data":"4bd5dbe3f7a8b7903c6ee652f72c588239fb784e24ebb2f47f5ca3b9452668fa"} Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.519916 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.610494 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle\") pod \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.610663 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkklk\" (UniqueName: \"kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk\") pod \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.610696 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data\") pod \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.610727 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts\") pod \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\" (UID: \"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff\") " Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.615924 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts" (OuterVolumeSpecName: "scripts") pod "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" (UID: "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.616965 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk" (OuterVolumeSpecName: "kube-api-access-fkklk") pod "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" (UID: "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff"). InnerVolumeSpecName "kube-api-access-fkklk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.642830 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" (UID: "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.646080 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data" (OuterVolumeSpecName: "config-data") pod "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" (UID: "cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.713046 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkklk\" (UniqueName: \"kubernetes.io/projected/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-kube-api-access-fkklk\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.713275 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.713384 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:15 crc kubenswrapper[4845]: I0202 10:55:15.713455 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.134689 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" event={"ID":"cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff","Type":"ContainerDied","Data":"7649afe14ec5ab0dbef279c4cc985147cdb3a346b125d477468bb7cabfb65001"} Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.134979 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7649afe14ec5ab0dbef279c4cc985147cdb3a346b125d477468bb7cabfb65001" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.134820 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qwpzq" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.315689 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:55:16 crc kubenswrapper[4845]: E0202 10:55:16.316278 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e1823d-46cd-40c5-bea1-162473f9a4ce" containerName="heat-cfnapi" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.316305 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e1823d-46cd-40c5-bea1-162473f9a4ce" containerName="heat-cfnapi" Feb 02 10:55:16 crc kubenswrapper[4845]: E0202 10:55:16.316329 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" containerName="nova-cell0-conductor-db-sync" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.316339 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" containerName="nova-cell0-conductor-db-sync" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.316612 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" containerName="nova-cell0-conductor-db-sync" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.316676 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e1823d-46cd-40c5-bea1-162473f9a4ce" containerName="heat-cfnapi" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.317522 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.319117 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8lh6g" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.319602 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.339661 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.429125 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.429293 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6zs6\" (UniqueName: \"kubernetes.io/projected/af5e3b4b-9a44-4b50-8799-71f869de9028-kube-api-access-q6zs6\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.429327 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.531992 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.532171 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6zs6\" (UniqueName: \"kubernetes.io/projected/af5e3b4b-9a44-4b50-8799-71f869de9028-kube-api-access-q6zs6\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.532208 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.537455 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.542077 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e3b4b-9a44-4b50-8799-71f869de9028-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.557323 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6zs6\" (UniqueName: \"kubernetes.io/projected/af5e3b4b-9a44-4b50-8799-71f869de9028-kube-api-access-q6zs6\") pod \"nova-cell0-conductor-0\" (UID: 
\"af5e3b4b-9a44-4b50-8799-71f869de9028\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:16 crc kubenswrapper[4845]: I0202 10:55:16.642945 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:17 crc kubenswrapper[4845]: I0202 10:55:17.146412 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:55:18 crc kubenswrapper[4845]: I0202 10:55:18.160649 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af5e3b4b-9a44-4b50-8799-71f869de9028","Type":"ContainerStarted","Data":"2d72ecf8fc97f76106c058686a64dffee6b2ab0e9293464960787ca9a2163690"} Feb 02 10:55:18 crc kubenswrapper[4845]: I0202 10:55:18.161010 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af5e3b4b-9a44-4b50-8799-71f869de9028","Type":"ContainerStarted","Data":"d8a9f03eec02b52248ed5434c4d3a71e0901cf641c7aee807f3852b772bc941f"} Feb 02 10:55:18 crc kubenswrapper[4845]: I0202 10:55:18.162660 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:18 crc kubenswrapper[4845]: I0202 10:55:18.187561 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.187543767 podStartE2EDuration="2.187543767s" podCreationTimestamp="2026-02-02 10:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:18.180639398 +0000 UTC m=+1399.272040858" watchObservedRunningTime="2026-02-02 10:55:18.187543767 +0000 UTC m=+1399.278945217" Feb 02 10:55:25 crc kubenswrapper[4845]: I0202 10:55:25.598684 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 10:55:26 crc kubenswrapper[4845]: I0202 
10:55:26.714452 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.142431 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-725jn"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.144005 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.147139 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.155159 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.168301 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-725jn"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.215935 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.215985 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.216015 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282l4\" (UniqueName: 
\"kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.216325 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.321693 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.321905 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.321959 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.322015 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282l4\" (UniqueName: 
\"kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.340332 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.343320 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282l4\" (UniqueName: \"kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.348911 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.349766 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-725jn\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.423007 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.428700 4845 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.437642 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.479860 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.489653 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.491782 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.501608 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.524116 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554405 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchv8\" (UniqueName: \"kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554559 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554648 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgljn\" 
(UniqueName: \"kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554823 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554878 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.554967 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.555209 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.555437 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657549 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gchv8\" (UniqueName: \"kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657638 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657689 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgljn\" (UniqueName: \"kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657778 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657812 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657837 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " 
pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.657972 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.667147 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.677205 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.678704 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.688519 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.688627 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.690618 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.696475 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.696855 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.709440 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchv8\" (UniqueName: \"kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8\") pod \"nova-scheduler-0\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.725062 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgljn\" (UniqueName: \"kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn\") pod \"nova-api-0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.763396 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.767011 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.774483 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.774837 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.775386 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dl5\" (UniqueName: \"kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.791242 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.791314 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.803366 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.804720 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.810772 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.839382 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.864362 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.879180 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.879348 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.879407 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.879538 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dl5\" (UniqueName: \"kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " 
pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.890589 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.890957 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.904491 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dl5\" (UniqueName: \"kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.911351 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data\") pod \"nova-metadata-0\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " pod="openstack/nova-metadata-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.948548 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:27 crc kubenswrapper[4845]: I0202 10:55:27.997964 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.011946 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.012011 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.013144 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.013254 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.013817 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f27jh\" (UniqueName: \"kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.013848 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gzv\" (UniqueName: \"kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.014242 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.014565 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.014622 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117314 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f27jh\" (UniqueName: \"kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117354 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gzv\" (UniqueName: \"kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117395 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117433 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117450 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117499 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117521 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117556 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.117586 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.119016 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.119812 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb\") pod 
\"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.122819 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.123970 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.124455 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.124702 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.125395 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 
crc kubenswrapper[4845]: I0202 10:55:28.130057 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.149320 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27jh\" (UniqueName: \"kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh\") pod \"dnsmasq-dns-9b86998b5-hvdzc\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.155148 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gzv\" (UniqueName: \"kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.225732 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.311861 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-725jn"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.444341 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.603572 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.753088 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r4lj7"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.754828 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.760380 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.760652 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.785431 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r4lj7"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.853865 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.854164 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jh8\" (UniqueName: \"kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.855713 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.856265 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.884962 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.903210 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.973617 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.973716 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7jh8\" (UniqueName: \"kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.973800 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:28 crc kubenswrapper[4845]: I0202 10:55:28.973926 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:28.998234 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:28.999485 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.003146 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.029612 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7jh8\" (UniqueName: \"kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8\") pod \"nova-cell1-conductor-db-sync-r4lj7\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.091832 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.291723 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.294611 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerStarted","Data":"796d16b6c2b2f99942af8c650f6fd9074569009a172c9c4df05a46809ca9e640"} Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.307811 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerStarted","Data":"f57f9e17a253f3027639c6870894363d5912e86c0ad65390204a822d4d8ff33a"} Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.316266 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"516e4c98-314a-4116-b0fc-45c18fd1c7e1","Type":"ContainerStarted","Data":"ad10cf81d533e8b37ef72f4ca4f01fc9f32600963e1feb8c42617710f205cfdd"} Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.318712 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-725jn" event={"ID":"7439e987-75e8-4cc8-840a-742c6f07dea9","Type":"ContainerStarted","Data":"7ebfe4b502f94b649860108409e8e9b93586764606d6f176206def30cf86a61d"} Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.318739 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-725jn" event={"ID":"7439e987-75e8-4cc8-840a-742c6f07dea9","Type":"ContainerStarted","Data":"b69a35d9b6473cf3a175810e25cf4dd4eb5c8693efc5afa386457c19bc88440b"} Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.373551 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-725jn" podStartSLOduration=2.373523951 
podStartE2EDuration="2.373523951s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:29.345183175 +0000 UTC m=+1410.436584645" watchObservedRunningTime="2026-02-02 10:55:29.373523951 +0000 UTC m=+1410.464925401" Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.445694 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:55:29 crc kubenswrapper[4845]: I0202 10:55:29.876651 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r4lj7"] Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.339141 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98","Type":"ContainerStarted","Data":"fe04f70476c707b2543738532ef74000620c8c2924f735fde10bff5d95053cb3"} Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.353319 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" event={"ID":"7b2bad3a-8153-41d8-83f6-9f9caa16589b","Type":"ContainerStarted","Data":"f926f2a33b190bee678900de2f6fcfec869f9b6aacd707a0eedb2404459e01e7"} Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.353367 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" event={"ID":"7b2bad3a-8153-41d8-83f6-9f9caa16589b","Type":"ContainerStarted","Data":"da194489804af83cf7bd974e17436580f313625e8bbc411b3359df8752a517c2"} Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.387265 4845 generic.go:334] "Generic (PLEG): container finished" podID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerID="f824758afe3c663578c7666b1f094db1caee520920ba737c52e548b07f3ee9ad" exitCode=0 Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.387409 4845 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" event={"ID":"83cd6f6d-3615-46e0-875a-e1cec10e9631","Type":"ContainerDied","Data":"f824758afe3c663578c7666b1f094db1caee520920ba737c52e548b07f3ee9ad"} Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.387474 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" event={"ID":"83cd6f6d-3615-46e0-875a-e1cec10e9631","Type":"ContainerStarted","Data":"5459dadbda58e0ce878baeea7244a3a46fe1a38ff8b8032f5afcf3e9f7c8bd0d"} Feb 02 10:55:30 crc kubenswrapper[4845]: I0202 10:55:30.390278 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" podStartSLOduration=2.390254938 podStartE2EDuration="2.390254938s" podCreationTimestamp="2026-02-02 10:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:30.379440086 +0000 UTC m=+1411.470841536" watchObservedRunningTime="2026-02-02 10:55:30.390254938 +0000 UTC m=+1411.481656388" Feb 02 10:55:31 crc kubenswrapper[4845]: I0202 10:55:31.298346 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:55:31 crc kubenswrapper[4845]: I0202 10:55:31.330958 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:31 crc kubenswrapper[4845]: I0202 10:55:31.403957 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" event={"ID":"83cd6f6d-3615-46e0-875a-e1cec10e9631","Type":"ContainerStarted","Data":"4588a5acb74ef82abd161d26d28f375cb1f56efac97a86455f032c5662e7188e"} Feb 02 10:55:31 crc kubenswrapper[4845]: I0202 10:55:31.404530 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:31 crc kubenswrapper[4845]: I0202 10:55:31.427141 4845 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" podStartSLOduration=4.427120114 podStartE2EDuration="4.427120114s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:31.424989893 +0000 UTC m=+1412.516391343" watchObservedRunningTime="2026-02-02 10:55:31.427120114 +0000 UTC m=+1412.518521564" Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.434947 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98","Type":"ContainerStarted","Data":"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.435071 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad" gracePeriod=30 Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.437573 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"516e4c98-314a-4116-b0fc-45c18fd1c7e1","Type":"ContainerStarted","Data":"86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.440167 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerStarted","Data":"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.440212 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-log" containerID="cri-o://e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" gracePeriod=30 Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.440222 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-metadata" containerID="cri-o://3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" gracePeriod=30 Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.440230 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerStarted","Data":"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.449160 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerStarted","Data":"7371036457c908a60d98a53f34d54f0d70618efbebcf97fbc3e3f1b041ae7110"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.449213 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerStarted","Data":"54419ef52d04b18470afc9cd9fe1e7776568928b396efb56b1f79342767e7b05"} Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.475325 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.707314967 podStartE2EDuration="7.475300974s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="2026-02-02 10:55:29.592107325 +0000 UTC m=+1410.683508775" lastFinishedPulling="2026-02-02 10:55:33.360093342 +0000 UTC m=+1414.451494782" observedRunningTime="2026-02-02 10:55:34.463385021 +0000 UTC m=+1415.554786471" 
watchObservedRunningTime="2026-02-02 10:55:34.475300974 +0000 UTC m=+1415.566702434" Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.498504 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.121617453 podStartE2EDuration="7.498477572s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="2026-02-02 10:55:28.955381072 +0000 UTC m=+1410.046782522" lastFinishedPulling="2026-02-02 10:55:33.332241201 +0000 UTC m=+1414.423642641" observedRunningTime="2026-02-02 10:55:34.484433727 +0000 UTC m=+1415.575835177" watchObservedRunningTime="2026-02-02 10:55:34.498477572 +0000 UTC m=+1415.589879022" Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.511291 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.830300344 podStartE2EDuration="7.51127178s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="2026-02-02 10:55:28.64208043 +0000 UTC m=+1409.733481880" lastFinishedPulling="2026-02-02 10:55:33.323051866 +0000 UTC m=+1414.414453316" observedRunningTime="2026-02-02 10:55:34.504755523 +0000 UTC m=+1415.596156973" watchObservedRunningTime="2026-02-02 10:55:34.51127178 +0000 UTC m=+1415.602673230" Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.529049 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.1610733890000002 podStartE2EDuration="7.529025701s" podCreationTimestamp="2026-02-02 10:55:27 +0000 UTC" firstStartedPulling="2026-02-02 10:55:28.955502045 +0000 UTC m=+1410.046903495" lastFinishedPulling="2026-02-02 10:55:33.323454357 +0000 UTC m=+1414.414855807" observedRunningTime="2026-02-02 10:55:34.5199662 +0000 UTC m=+1415.611367650" watchObservedRunningTime="2026-02-02 10:55:34.529025701 +0000 UTC m=+1415.620427151" Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.892466 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:34 crc kubenswrapper[4845]: I0202 10:55:34.893413 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" containerName="kube-state-metrics" containerID="cri-o://cacef27d0dbc8696a18778de5ec2dfe7815ec16e5c7a5beb939fff7f7c4e5a61" gracePeriod=30 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.128130 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.128603 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" containerName="mysqld-exporter" containerID="cri-o://995f2b7a30c667d098b78f0a5f78fb72fb30f1f5da96bbc6a50e3a4f536e40bb" gracePeriod=30 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.240790 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.290534 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs\") pod \"aca5ade4-10bb-4dc2-83c9-546c778230b1\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.290641 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle\") pod \"aca5ade4-10bb-4dc2-83c9-546c778230b1\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.290673 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data\") pod \"aca5ade4-10bb-4dc2-83c9-546c778230b1\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.290754 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9dl5\" (UniqueName: \"kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5\") pod \"aca5ade4-10bb-4dc2-83c9-546c778230b1\" (UID: \"aca5ade4-10bb-4dc2-83c9-546c778230b1\") " Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.291229 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs" (OuterVolumeSpecName: "logs") pod "aca5ade4-10bb-4dc2-83c9-546c778230b1" (UID: "aca5ade4-10bb-4dc2-83c9-546c778230b1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.291857 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca5ade4-10bb-4dc2-83c9-546c778230b1-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.298501 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5" (OuterVolumeSpecName: "kube-api-access-t9dl5") pod "aca5ade4-10bb-4dc2-83c9-546c778230b1" (UID: "aca5ade4-10bb-4dc2-83c9-546c778230b1"). InnerVolumeSpecName "kube-api-access-t9dl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.338166 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca5ade4-10bb-4dc2-83c9-546c778230b1" (UID: "aca5ade4-10bb-4dc2-83c9-546c778230b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.350864 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data" (OuterVolumeSpecName: "config-data") pod "aca5ade4-10bb-4dc2-83c9-546c778230b1" (UID: "aca5ade4-10bb-4dc2-83c9-546c778230b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.395997 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.396031 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca5ade4-10bb-4dc2-83c9-546c778230b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.396043 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9dl5\" (UniqueName: \"kubernetes.io/projected/aca5ade4-10bb-4dc2-83c9-546c778230b1-kube-api-access-t9dl5\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.495787 4845 generic.go:334] "Generic (PLEG): container finished" podID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerID="3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" exitCode=0 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.495825 4845 generic.go:334] "Generic (PLEG): container finished" podID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerID="e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" exitCode=143 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.495915 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerDied","Data":"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4"} Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.495951 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerDied","Data":"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8"} Feb 02 10:55:35 crc 
kubenswrapper[4845]: I0202 10:55:35.495964 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aca5ade4-10bb-4dc2-83c9-546c778230b1","Type":"ContainerDied","Data":"796d16b6c2b2f99942af8c650f6fd9074569009a172c9c4df05a46809ca9e640"} Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.495981 4845 scope.go:117] "RemoveContainer" containerID="3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.496157 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.512243 4845 generic.go:334] "Generic (PLEG): container finished" podID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" containerID="995f2b7a30c667d098b78f0a5f78fb72fb30f1f5da96bbc6a50e3a4f536e40bb" exitCode=2 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.512337 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ada4f3a2-2715-4c0c-bc32-5c488a2e1996","Type":"ContainerDied","Data":"995f2b7a30c667d098b78f0a5f78fb72fb30f1f5da96bbc6a50e3a4f536e40bb"} Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.531160 4845 generic.go:334] "Generic (PLEG): container finished" podID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" containerID="cacef27d0dbc8696a18778de5ec2dfe7815ec16e5c7a5beb939fff7f7c4e5a61" exitCode=2 Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.532456 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31","Type":"ContainerDied","Data":"cacef27d0dbc8696a18778de5ec2dfe7815ec16e5c7a5beb939fff7f7c4e5a61"} Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.603078 4845 scope.go:117] "RemoveContainer" containerID="e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.610508 4845 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.665777 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.762535 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" path="/var/lib/kubelet/pods/aca5ade4-10bb-4dc2-83c9-546c778230b1/volumes" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.764649 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:35 crc kubenswrapper[4845]: E0202 10:55:35.765140 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-log" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.765159 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-log" Feb 02 10:55:35 crc kubenswrapper[4845]: E0202 10:55:35.765191 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-metadata" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.765201 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-metadata" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.765489 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-log" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.765522 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca5ade4-10bb-4dc2-83c9-546c778230b1" containerName="nova-metadata-metadata" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.771196 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.772860 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.777761 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.779374 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.785380 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.877298 4845 scope.go:117] "RemoveContainer" containerID="3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" Feb 02 10:55:35 crc kubenswrapper[4845]: E0202 10:55:35.881317 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4\": container with ID starting with 3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4 not found: ID does not exist" containerID="3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.881376 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4"} err="failed to get container status \"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4\": rpc error: code = NotFound desc = could not find container \"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4\": container with ID starting with 3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4 not found: ID does not exist" Feb 02 10:55:35 crc 
kubenswrapper[4845]: I0202 10:55:35.881406 4845 scope.go:117] "RemoveContainer" containerID="e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" Feb 02 10:55:35 crc kubenswrapper[4845]: E0202 10:55:35.885396 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8\": container with ID starting with e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8 not found: ID does not exist" containerID="e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.885434 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8"} err="failed to get container status \"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8\": rpc error: code = NotFound desc = could not find container \"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8\": container with ID starting with e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8 not found: ID does not exist" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.885461 4845 scope.go:117] "RemoveContainer" containerID="3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.885665 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4"} err="failed to get container status \"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4\": rpc error: code = NotFound desc = could not find container \"3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4\": container with ID starting with 3b4016fad380e8d5250c22e00f2cccbbc1b2ac3899c47e7955ec48e2348141c4 not found: ID does not exist" Feb 02 
10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.885679 4845 scope.go:117] "RemoveContainer" containerID="e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.885844 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8"} err="failed to get container status \"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8\": rpc error: code = NotFound desc = could not find container \"e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8\": container with ID starting with e096766b4b6ca60fad6f519a167619715093a61239745550dfa1ef9860d2d4a8 not found: ID does not exist" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.933373 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr6hd\" (UniqueName: \"kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd\") pod \"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31\" (UID: \"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31\") " Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.935138 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.935214 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rlf\" (UniqueName: \"kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.935635 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.936029 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.936154 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:35 crc kubenswrapper[4845]: I0202 10:55:35.953692 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd" (OuterVolumeSpecName: "kube-api-access-pr6hd") pod "a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" (UID: "a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31"). InnerVolumeSpecName "kube-api-access-pr6hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.043615 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.043723 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.043750 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rlf\" (UniqueName: \"kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.043844 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.043955 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.044042 4845 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pr6hd\" (UniqueName: \"kubernetes.io/projected/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31-kube-api-access-pr6hd\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.044689 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.050498 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.051149 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.052212 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.062749 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rlf\" (UniqueName: \"kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf\") pod \"nova-metadata-0\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 
10:55:36.063060 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.104744 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.260757 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle\") pod \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.260867 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data\") pod \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.261016 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjmd\" (UniqueName: \"kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd\") pod \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\" (UID: \"ada4f3a2-2715-4c0c-bc32-5c488a2e1996\") " Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.275124 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd" (OuterVolumeSpecName: "kube-api-access-5zjmd") pod "ada4f3a2-2715-4c0c-bc32-5c488a2e1996" (UID: "ada4f3a2-2715-4c0c-bc32-5c488a2e1996"). InnerVolumeSpecName "kube-api-access-5zjmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.343154 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ada4f3a2-2715-4c0c-bc32-5c488a2e1996" (UID: "ada4f3a2-2715-4c0c-bc32-5c488a2e1996"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.364908 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.364946 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjmd\" (UniqueName: \"kubernetes.io/projected/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-kube-api-access-5zjmd\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.385200 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data" (OuterVolumeSpecName: "config-data") pod "ada4f3a2-2715-4c0c-bc32-5c488a2e1996" (UID: "ada4f3a2-2715-4c0c-bc32-5c488a2e1996"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.467042 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada4f3a2-2715-4c0c-bc32-5c488a2e1996-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.544340 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31","Type":"ContainerDied","Data":"e2f9a94d7e921b062f55ff2afa3f02c589bdda200432ed3b8b900c15b062f04e"} Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.544380 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.544427 4845 scope.go:117] "RemoveContainer" containerID="cacef27d0dbc8696a18778de5ec2dfe7815ec16e5c7a5beb939fff7f7c4e5a61" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.558169 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ada4f3a2-2715-4c0c-bc32-5c488a2e1996","Type":"ContainerDied","Data":"4f3a714246c19659b6b750ef01e05f53d25576adc8d172e3db972b276255f379"} Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.558340 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.589792 4845 scope.go:117] "RemoveContainer" containerID="995f2b7a30c667d098b78f0a5f78fb72fb30f1f5da96bbc6a50e3a4f536e40bb" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.590188 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.631541 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.646447 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.656522 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.666667 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: E0202 10:55:36.667575 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" containerName="mysqld-exporter" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.667608 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" containerName="mysqld-exporter" Feb 02 10:55:36 crc kubenswrapper[4845]: E0202 10:55:36.667629 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" containerName="kube-state-metrics" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.667637 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" containerName="kube-state-metrics" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.667904 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" 
containerName="kube-state-metrics" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.667942 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" containerName="mysqld-exporter" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.689123 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.692535 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.692634 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.696031 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.696327 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.696590 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.696762 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.696848 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.708044 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.728812 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885456 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885565 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885623 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5g6\" (UniqueName: \"kubernetes.io/projected/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-api-access-7v5g6\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885679 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-config-data\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885828 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4h9q\" (UniqueName: \"kubernetes.io/projected/6704fdd3-f589-4ccd-9a52-4a914e219b09-kube-api-access-k4h9q\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.885922 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.887705 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.887778 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.989970 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4h9q\" (UniqueName: \"kubernetes.io/projected/6704fdd3-f589-4ccd-9a52-4a914e219b09-kube-api-access-k4h9q\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990044 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990064 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990096 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990138 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990187 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990219 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v5g6\" (UniqueName: \"kubernetes.io/projected/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-api-access-7v5g6\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.990266 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-config-data\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.994670 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-config-data\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.994987 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.995650 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:36 crc kubenswrapper[4845]: I0202 10:55:36.995773 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6704fdd3-f589-4ccd-9a52-4a914e219b09-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.002428 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.003232 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.018986 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v5g6\" (UniqueName: \"kubernetes.io/projected/f70412f5-a824-45b2-92c2-8e37a25d540a-kube-api-access-7v5g6\") pod \"kube-state-metrics-0\" (UID: \"f70412f5-a824-45b2-92c2-8e37a25d540a\") " pod="openstack/kube-state-metrics-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.026587 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4h9q\" (UniqueName: \"kubernetes.io/projected/6704fdd3-f589-4ccd-9a52-4a914e219b09-kube-api-access-k4h9q\") pod \"mysqld-exporter-0\" (UID: \"6704fdd3-f589-4ccd-9a52-4a914e219b09\") " pod="openstack/mysqld-exporter-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.035341 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.112085 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.598979 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerStarted","Data":"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587"} Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.599300 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerStarted","Data":"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34"} Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.599319 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerStarted","Data":"3d15c9909f7d9d3d544fcb74bd77649d5815fc4550664c3c132f452b9ce91981"} Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.627483 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.631656 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.63164044 podStartE2EDuration="2.63164044s" podCreationTimestamp="2026-02-02 10:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:37.628317744 +0000 UTC m=+1418.719719214" watchObservedRunningTime="2026-02-02 10:55:37.63164044 +0000 UTC m=+1418.723041880" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.735300 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31" path="/var/lib/kubelet/pods/a7c0daea-a5c7-4695-bd4f-ad9a3aaf7d31/volumes" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.737618 4845 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada4f3a2-2715-4c0c-bc32-5c488a2e1996" path="/var/lib/kubelet/pods/ada4f3a2-2715-4c0c-bc32-5c488a2e1996/volumes" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.746558 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.767470 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.767527 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.949858 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.949930 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 10:55:37 crc kubenswrapper[4845]: I0202 10:55:37.983739 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.228148 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.307599 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"] Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.307836 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="dnsmasq-dns" containerID="cri-o://f6562082a819ade2f17a46123a0743170963965766d739159cebc380dae9a85f" gracePeriod=10 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.446236 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.469876 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.470268 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-central-agent" containerID="cri-o://59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017" gracePeriod=30 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.470627 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="proxy-httpd" containerID="cri-o://c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699" gracePeriod=30 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.470794 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-notification-agent" containerID="cri-o://db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0" gracePeriod=30 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.471041 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="sg-core" containerID="cri-o://66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358" gracePeriod=30 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.638499 4845 generic.go:334] "Generic (PLEG): container finished" podID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerID="66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358" exitCode=2 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.638577 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerDied","Data":"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358"} Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.640618 4845 generic.go:334] "Generic (PLEG): container finished" podID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerID="f6562082a819ade2f17a46123a0743170963965766d739159cebc380dae9a85f" exitCode=0 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.640674 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" event={"ID":"2ab561fd-1cd4-43c4-a09d-401ca966b4bb","Type":"ContainerDied","Data":"f6562082a819ade2f17a46123a0743170963965766d739159cebc380dae9a85f"} Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.645480 4845 generic.go:334] "Generic (PLEG): container finished" podID="7439e987-75e8-4cc8-840a-742c6f07dea9" containerID="7ebfe4b502f94b649860108409e8e9b93586764606d6f176206def30cf86a61d" exitCode=0 Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.645546 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-725jn" event={"ID":"7439e987-75e8-4cc8-840a-742c6f07dea9","Type":"ContainerDied","Data":"7ebfe4b502f94b649860108409e8e9b93586764606d6f176206def30cf86a61d"} Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.648654 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6704fdd3-f589-4ccd-9a52-4a914e219b09","Type":"ContainerStarted","Data":"eb6f676c13c56bf650964bf2919685e2373ea49300b6130e065763a1f64a68f1"} Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.651776 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70412f5-a824-45b2-92c2-8e37a25d540a","Type":"ContainerStarted","Data":"4e42d1ddcc432477461c19616063b7119fe01ff4bd6ebccbbf31061234415ace"} Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.695595 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.854166 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:55:38 crc kubenswrapper[4845]: I0202 10:55:38.854757 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.020942 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048065 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb\") pod \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048135 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc\") pod \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048193 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb\") pod 
\"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048335 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config\") pod \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048391 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0\") pod \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.048471 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbcn2\" (UniqueName: \"kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2\") pod \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\" (UID: \"2ab561fd-1cd4-43c4-a09d-401ca966b4bb\") " Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.053471 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2" (OuterVolumeSpecName: "kube-api-access-sbcn2") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "kube-api-access-sbcn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.152388 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbcn2\" (UniqueName: \"kubernetes.io/projected/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-kube-api-access-sbcn2\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.286170 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.303003 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config" (OuterVolumeSpecName: "config") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.304673 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.306657 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.318068 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ab561fd-1cd4-43c4-a09d-401ca966b4bb" (UID: "2ab561fd-1cd4-43c4-a09d-401ca966b4bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.358173 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.358213 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.358224 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.358235 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc 
kubenswrapper[4845]: I0202 10:55:39.358249 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ab561fd-1cd4-43c4-a09d-401ca966b4bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.665667 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.665692 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-nh2sl" event={"ID":"2ab561fd-1cd4-43c4-a09d-401ca966b4bb","Type":"ContainerDied","Data":"250cda2944b48f437289aaa5905e90991c9b8dc1b5cb5593f83b10af7cbf343a"} Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.665767 4845 scope.go:117] "RemoveContainer" containerID="f6562082a819ade2f17a46123a0743170963965766d739159cebc380dae9a85f" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.667409 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6704fdd3-f589-4ccd-9a52-4a914e219b09","Type":"ContainerStarted","Data":"7bee02cd6eb60a2be51967ead29a3f60c6e67e4f6cb4d15d685a83a6496ac623"} Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.670147 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.670877 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70412f5-a824-45b2-92c2-8e37a25d540a","Type":"ContainerStarted","Data":"0a5ef7131a1aa86b3afe64adf7aee4c9bd44c4d90ab9f37331cecd55bf74302a"} Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.676047 4845 generic.go:334] "Generic (PLEG): container finished" podID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerID="c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699" exitCode=0 Feb 02 10:55:39 crc 
kubenswrapper[4845]: I0202 10:55:39.676085 4845 generic.go:334] "Generic (PLEG): container finished" podID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerID="59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017" exitCode=0 Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.676266 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerDied","Data":"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699"} Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.676301 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerDied","Data":"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017"} Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.693434 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.322513867 podStartE2EDuration="3.693415308s" podCreationTimestamp="2026-02-02 10:55:36 +0000 UTC" firstStartedPulling="2026-02-02 10:55:37.743951773 +0000 UTC m=+1418.835353223" lastFinishedPulling="2026-02-02 10:55:38.114853224 +0000 UTC m=+1419.206254664" observedRunningTime="2026-02-02 10:55:39.69036926 +0000 UTC m=+1420.781770730" watchObservedRunningTime="2026-02-02 10:55:39.693415308 +0000 UTC m=+1420.784816758" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.710227 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.009824553 podStartE2EDuration="3.710207311s" podCreationTimestamp="2026-02-02 10:55:36 +0000 UTC" firstStartedPulling="2026-02-02 10:55:37.633223565 +0000 UTC m=+1418.724625015" lastFinishedPulling="2026-02-02 10:55:38.333606323 +0000 UTC m=+1419.425007773" observedRunningTime="2026-02-02 10:55:39.706296659 +0000 UTC m=+1420.797698109" 
watchObservedRunningTime="2026-02-02 10:55:39.710207311 +0000 UTC m=+1420.801608761" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.728993 4845 scope.go:117] "RemoveContainer" containerID="47eae6a3cd68dd92b0b808c46cdd7b757872d2c3608874a20759716d81ea4849" Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.741125 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"] Feb 02 10:55:39 crc kubenswrapper[4845]: I0202 10:55:39.752905 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-nh2sl"] Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.240660 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.382053 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-282l4\" (UniqueName: \"kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4\") pod \"7439e987-75e8-4cc8-840a-742c6f07dea9\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.382156 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data\") pod \"7439e987-75e8-4cc8-840a-742c6f07dea9\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.382237 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts\") pod \"7439e987-75e8-4cc8-840a-742c6f07dea9\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.382319 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle\") pod \"7439e987-75e8-4cc8-840a-742c6f07dea9\" (UID: \"7439e987-75e8-4cc8-840a-742c6f07dea9\") " Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.391042 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts" (OuterVolumeSpecName: "scripts") pod "7439e987-75e8-4cc8-840a-742c6f07dea9" (UID: "7439e987-75e8-4cc8-840a-742c6f07dea9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.418409 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4" (OuterVolumeSpecName: "kube-api-access-282l4") pod "7439e987-75e8-4cc8-840a-742c6f07dea9" (UID: "7439e987-75e8-4cc8-840a-742c6f07dea9"). InnerVolumeSpecName "kube-api-access-282l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.426601 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data" (OuterVolumeSpecName: "config-data") pod "7439e987-75e8-4cc8-840a-742c6f07dea9" (UID: "7439e987-75e8-4cc8-840a-742c6f07dea9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.454416 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7439e987-75e8-4cc8-840a-742c6f07dea9" (UID: "7439e987-75e8-4cc8-840a-742c6f07dea9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.485479 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.485510 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.485523 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-282l4\" (UniqueName: \"kubernetes.io/projected/7439e987-75e8-4cc8-840a-742c6f07dea9-kube-api-access-282l4\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.485533 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7439e987-75e8-4cc8-840a-742c6f07dea9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.694489 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-725jn" event={"ID":"7439e987-75e8-4cc8-840a-742c6f07dea9","Type":"ContainerDied","Data":"b69a35d9b6473cf3a175810e25cf4dd4eb5c8693efc5afa386457c19bc88440b"} Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.694898 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69a35d9b6473cf3a175810e25cf4dd4eb5c8693efc5afa386457c19bc88440b" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.694987 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-725jn" Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.701751 4845 generic.go:334] "Generic (PLEG): container finished" podID="7b2bad3a-8153-41d8-83f6-9f9caa16589b" containerID="f926f2a33b190bee678900de2f6fcfec869f9b6aacd707a0eedb2404459e01e7" exitCode=0 Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.701834 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" event={"ID":"7b2bad3a-8153-41d8-83f6-9f9caa16589b","Type":"ContainerDied","Data":"f926f2a33b190bee678900de2f6fcfec869f9b6aacd707a0eedb2404459e01e7"} Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.860624 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.860985 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-api" containerID="cri-o://7371036457c908a60d98a53f34d54f0d70618efbebcf97fbc3e3f1b041ae7110" gracePeriod=30 Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.861000 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-log" containerID="cri-o://54419ef52d04b18470afc9cd9fe1e7776568928b396efb56b1f79342767e7b05" gracePeriod=30 Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.883069 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.883396 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerName="nova-scheduler-scheduler" containerID="cri-o://86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" gracePeriod=30 Feb 02 
10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.897624 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.897942 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-log" containerID="cri-o://dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" gracePeriod=30 Feb 02 10:55:40 crc kubenswrapper[4845]: I0202 10:55:40.898038 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-metadata" containerID="cri-o://5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" gracePeriod=30 Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.105120 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.105206 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.698714 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.705399 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.727962 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" path="/var/lib/kubelet/pods/2ab561fd-1cd4-43c4-a09d-401ca966b4bb/volumes" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.729925 4845 generic.go:334] "Generic (PLEG): container finished" podID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerID="5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" exitCode=0 Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.729957 4845 generic.go:334] "Generic (PLEG): container finished" podID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerID="dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" exitCode=143 Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.730081 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.734096 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerDied","Data":"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.734138 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerDied","Data":"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.734149 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62eefe51-d633-47bd-b7b8-1b786cc8bdde","Type":"ContainerDied","Data":"3d15c9909f7d9d3d544fcb74bd77649d5815fc4550664c3c132f452b9ce91981"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.734175 4845 scope.go:117] "RemoveContainer" 
containerID="5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.756816 4845 generic.go:334] "Generic (PLEG): container finished" podID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerID="54419ef52d04b18470afc9cd9fe1e7776568928b396efb56b1f79342767e7b05" exitCode=143 Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.756932 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerDied","Data":"54419ef52d04b18470afc9cd9fe1e7776568928b396efb56b1f79342767e7b05"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.760857 4845 generic.go:334] "Generic (PLEG): container finished" podID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerID="db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0" exitCode=0 Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.761178 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.761709 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerDied","Data":"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.761749 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c009cae-0016-4d35-9773-1e313feb5c4a","Type":"ContainerDied","Data":"4a2f52662f64ac4adafea886384b8aff903812b748685ed30b174caef644d27d"} Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.807702 4845 scope.go:117] "RemoveContainer" containerID="dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.819736 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.819781 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data\") pod \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.819853 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rlf\" (UniqueName: \"kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf\") pod \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.819959 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bknng\" (UniqueName: \"kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820008 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820028 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle\") pod \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820128 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820162 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820186 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs\") pod 
\"62eefe51-d633-47bd-b7b8-1b786cc8bdde\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820205 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs\") pod \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\" (UID: \"62eefe51-d633-47bd-b7b8-1b786cc8bdde\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820308 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.820353 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data\") pod \"1c009cae-0016-4d35-9773-1e313feb5c4a\" (UID: \"1c009cae-0016-4d35-9773-1e313feb5c4a\") " Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.821873 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.824312 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs" (OuterVolumeSpecName: "logs") pod "62eefe51-d633-47bd-b7b8-1b786cc8bdde" (UID: "62eefe51-d633-47bd-b7b8-1b786cc8bdde"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.826077 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.831134 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf" (OuterVolumeSpecName: "kube-api-access-v8rlf") pod "62eefe51-d633-47bd-b7b8-1b786cc8bdde" (UID: "62eefe51-d633-47bd-b7b8-1b786cc8bdde"). InnerVolumeSpecName "kube-api-access-v8rlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.835420 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts" (OuterVolumeSpecName: "scripts") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.836865 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng" (OuterVolumeSpecName: "kube-api-access-bknng") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "kube-api-access-bknng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.850381 4845 scope.go:117] "RemoveContainer" containerID="5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" Feb 02 10:55:41 crc kubenswrapper[4845]: E0202 10:55:41.851901 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587\": container with ID starting with 5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587 not found: ID does not exist" containerID="5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.851942 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587"} err="failed to get container status \"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587\": rpc error: code = NotFound desc = could not find container \"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587\": container with ID starting with 5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587 not found: ID does not exist" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.851975 4845 scope.go:117] "RemoveContainer" containerID="dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" Feb 02 10:55:41 crc kubenswrapper[4845]: E0202 10:55:41.854484 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34\": container with ID starting with dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34 not found: ID does not exist" containerID="dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.854524 
4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34"} err="failed to get container status \"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34\": rpc error: code = NotFound desc = could not find container \"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34\": container with ID starting with dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34 not found: ID does not exist" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.854555 4845 scope.go:117] "RemoveContainer" containerID="5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.856722 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587"} err="failed to get container status \"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587\": rpc error: code = NotFound desc = could not find container \"5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587\": container with ID starting with 5129ed6969a185dc174054937b32f44ecf2b35e06649ff5063fa9400e0bbb587 not found: ID does not exist" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.856767 4845 scope.go:117] "RemoveContainer" containerID="dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.879127 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data" (OuterVolumeSpecName: "config-data") pod "62eefe51-d633-47bd-b7b8-1b786cc8bdde" (UID: "62eefe51-d633-47bd-b7b8-1b786cc8bdde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.879126 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34"} err="failed to get container status \"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34\": rpc error: code = NotFound desc = could not find container \"dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34\": container with ID starting with dc81d950bd12c8d88724602378669183e198dbdb561da383063fc6ae16648d34 not found: ID does not exist" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.879194 4845 scope.go:117] "RemoveContainer" containerID="c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.889512 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "62eefe51-d633-47bd-b7b8-1b786cc8bdde" (UID: "62eefe51-d633-47bd-b7b8-1b786cc8bdde"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.891868 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.894336 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62eefe51-d633-47bd-b7b8-1b786cc8bdde" (UID: "62eefe51-d633-47bd-b7b8-1b786cc8bdde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922865 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922914 4845 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922927 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eefe51-d633-47bd-b7b8-1b786cc8bdde-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922939 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922950 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c009cae-0016-4d35-9773-1e313feb5c4a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922978 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.922989 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rlf\" (UniqueName: \"kubernetes.io/projected/62eefe51-d633-47bd-b7b8-1b786cc8bdde-kube-api-access-v8rlf\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.923000 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bknng\" (UniqueName: \"kubernetes.io/projected/1c009cae-0016-4d35-9773-1e313feb5c4a-kube-api-access-bknng\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.923010 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.923020 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eefe51-d633-47bd-b7b8-1b786cc8bdde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.939642 4845 scope.go:117] "RemoveContainer" containerID="66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.952476 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.976120 4845 scope.go:117] "RemoveContainer" containerID="db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0" Feb 02 10:55:41 crc kubenswrapper[4845]: I0202 10:55:41.986039 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data" (OuterVolumeSpecName: "config-data") pod "1c009cae-0016-4d35-9773-1e313feb5c4a" (UID: "1c009cae-0016-4d35-9773-1e313feb5c4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.019414 4845 scope.go:117] "RemoveContainer" containerID="59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.025413 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.025446 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c009cae-0016-4d35-9773-1e313feb5c4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.056766 4845 scope.go:117] "RemoveContainer" containerID="c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.057299 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699\": container with ID starting with c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699 not found: ID does not exist" 
containerID="c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057353 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699"} err="failed to get container status \"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699\": rpc error: code = NotFound desc = could not find container \"c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699\": container with ID starting with c4da32ef9938e3a4eb7fa613a519402a486d73d59fbd5be714815e7741131699 not found: ID does not exist" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057388 4845 scope.go:117] "RemoveContainer" containerID="66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.057643 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358\": container with ID starting with 66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358 not found: ID does not exist" containerID="66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057676 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358"} err="failed to get container status \"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358\": rpc error: code = NotFound desc = could not find container \"66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358\": container with ID starting with 66ceb0e8f2a60f19d2266cd291bfbd6bc612a260193a33a15be90f73d2aea358 not found: ID does not exist" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057693 4845 scope.go:117] 
"RemoveContainer" containerID="db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.057911 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0\": container with ID starting with db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0 not found: ID does not exist" containerID="db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057941 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0"} err="failed to get container status \"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0\": rpc error: code = NotFound desc = could not find container \"db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0\": container with ID starting with db12f045f249fa1f64b095823c8181eff018f6d148c60c7badb2c2c6744d12c0 not found: ID does not exist" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.057958 4845 scope.go:117] "RemoveContainer" containerID="59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.058187 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017\": container with ID starting with 59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017 not found: ID does not exist" containerID="59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.058218 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017"} err="failed to get container status \"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017\": rpc error: code = NotFound desc = could not find container \"59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017\": container with ID starting with 59afef34d3f2b508eb95da3a2f7eae6630ea046d1baa6181ce4bd5b6934bb017 not found: ID does not exist" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.077822 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.097650 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.125576 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.172281 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.173919 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-central-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.173935 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-central-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.173958 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2bad3a-8153-41d8-83f6-9f9caa16589b" containerName="nova-cell1-conductor-db-sync" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.173965 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2bad3a-8153-41d8-83f6-9f9caa16589b" containerName="nova-cell1-conductor-db-sync" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 
10:55:42.173983 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="sg-core" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.173989 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="sg-core" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174018 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="dnsmasq-dns" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174024 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="dnsmasq-dns" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174038 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7439e987-75e8-4cc8-840a-742c6f07dea9" containerName="nova-manage" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174044 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7439e987-75e8-4cc8-840a-742c6f07dea9" containerName="nova-manage" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174058 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-log" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174064 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-log" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174081 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="proxy-httpd" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174086 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="proxy-httpd" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174103 4845 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="init" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174108 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="init" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174127 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-notification-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174133 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-notification-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.174158 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-metadata" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.174163 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-metadata" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176596 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="sg-core" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176640 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab561fd-1cd4-43c4-a09d-401ca966b4bb" containerName="dnsmasq-dns" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176663 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-metadata" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176677 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-notification-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 
10:55:42.176697 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7439e987-75e8-4cc8-840a-742c6f07dea9" containerName="nova-manage" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176720 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" containerName="nova-metadata-log" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176741 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="ceilometer-central-agent" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176759 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2bad3a-8153-41d8-83f6-9f9caa16589b" containerName="nova-cell1-conductor-db-sync" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.176780 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" containerName="proxy-httpd" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.178787 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.180789 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.182060 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.190948 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.203273 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.222347 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.237240 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data\") pod \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.237484 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7jh8\" (UniqueName: \"kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8\") pod \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.237536 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts\") pod \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.237635 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle\") pod \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\" (UID: \"7b2bad3a-8153-41d8-83f6-9f9caa16589b\") " Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.241735 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8" (OuterVolumeSpecName: "kube-api-access-l7jh8") pod "7b2bad3a-8153-41d8-83f6-9f9caa16589b" (UID: "7b2bad3a-8153-41d8-83f6-9f9caa16589b"). InnerVolumeSpecName "kube-api-access-l7jh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.242614 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts" (OuterVolumeSpecName: "scripts") pod "7b2bad3a-8153-41d8-83f6-9f9caa16589b" (UID: "7b2bad3a-8153-41d8-83f6-9f9caa16589b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.245196 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.248099 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.253826 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.255589 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.259596 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.262094 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.285447 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data" (OuterVolumeSpecName: "config-data") pod "7b2bad3a-8153-41d8-83f6-9f9caa16589b" (UID: "7b2bad3a-8153-41d8-83f6-9f9caa16589b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.289076 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b2bad3a-8153-41d8-83f6-9f9caa16589b" (UID: "7b2bad3a-8153-41d8-83f6-9f9caa16589b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340155 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340263 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340548 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340611 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340647 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 
10:55:42.340760 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.340954 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341124 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlh7l\" (UniqueName: \"kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341229 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341330 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341380 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341461 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341510 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xlk\" (UniqueName: \"kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341633 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7jh8\" (UniqueName: \"kubernetes.io/projected/7b2bad3a-8153-41d8-83f6-9f9caa16589b-kube-api-access-l7jh8\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341649 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341662 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.341675 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b2bad3a-8153-41d8-83f6-9f9caa16589b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443125 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443197 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xlk\" (UniqueName: \"kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443253 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443311 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443474 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443509 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443538 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.443588 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.444513 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.444720 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.444858 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlh7l\" (UniqueName: \"kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l\") pod \"nova-metadata-0\" (UID: 
\"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.444964 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.445047 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.445090 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.445409 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.445580 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.448470 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.448634 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.449511 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.449971 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.451685 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.452429 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.452460 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.455657 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.464217 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xlk\" (UniqueName: \"kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk\") pod \"ceilometer-0\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") " pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.470423 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlh7l\" (UniqueName: \"kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l\") pod \"nova-metadata-0\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.507574 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.570866 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.780070 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" event={"ID":"7b2bad3a-8153-41d8-83f6-9f9caa16589b","Type":"ContainerDied","Data":"da194489804af83cf7bd974e17436580f313625e8bbc411b3359df8752a517c2"} Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.780288 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da194489804af83cf7bd974e17436580f313625e8bbc411b3359df8752a517c2" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.780194 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-r4lj7" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.795066 4845 generic.go:334] "Generic (PLEG): container finished" podID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerID="86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" exitCode=0 Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.795196 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"516e4c98-314a-4116-b0fc-45c18fd1c7e1","Type":"ContainerDied","Data":"86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22"} Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.860384 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.862111 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.865084 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.870359 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.956544 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22 is running failed: container process not found" containerID="86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.956967 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22 is running failed: container process not found" containerID="86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.957297 4845 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22 is running failed: container process not found" containerID="86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:55:42 crc kubenswrapper[4845]: E0202 10:55:42.957331 4845 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerName="nova-scheduler-scheduler" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.958071 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nxz\" (UniqueName: \"kubernetes.io/projected/039d1d72-0f72-4172-a037-ea289c8d7fbb-kube-api-access-p8nxz\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.958194 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:42 crc kubenswrapper[4845]: I0202 10:55:42.958242 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.060258 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.060310 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.060546 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nxz\" (UniqueName: \"kubernetes.io/projected/039d1d72-0f72-4172-a037-ea289c8d7fbb-kube-api-access-p8nxz\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.066250 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.066729 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d1d72-0f72-4172-a037-ea289c8d7fbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.082530 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nxz\" (UniqueName: \"kubernetes.io/projected/039d1d72-0f72-4172-a037-ea289c8d7fbb-kube-api-access-p8nxz\") pod \"nova-cell1-conductor-0\" (UID: \"039d1d72-0f72-4172-a037-ea289c8d7fbb\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.135417 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:55:43 crc kubenswrapper[4845]: W0202 10:55:43.140980 4845 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab48fc91_e9f1_4362_8cb8_091846601a7e.slice/crio-03916b40dcfb1d58e9a1856cf115e52af0968b63f0ef6c849ed5ef4cac5ddc0d WatchSource:0}: Error finding container 03916b40dcfb1d58e9a1856cf115e52af0968b63f0ef6c849ed5ef4cac5ddc0d: Status 404 returned error can't find the container with id 03916b40dcfb1d58e9a1856cf115e52af0968b63f0ef6c849ed5ef4cac5ddc0d Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.191393 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.245230 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:55:43 crc kubenswrapper[4845]: W0202 10:55:43.249729 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8162bea_daa1_42e3_8921_3c12ad56dfa6.slice/crio-2275bc3cbd2ca4cb0365fb35199c6ba0dd53592c51fe579e48d78b478ab3a151 WatchSource:0}: Error finding container 2275bc3cbd2ca4cb0365fb35199c6ba0dd53592c51fe579e48d78b478ab3a151: Status 404 returned error can't find the container with id 2275bc3cbd2ca4cb0365fb35199c6ba0dd53592c51fe579e48d78b478ab3a151 Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.258432 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.367851 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle\") pod \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.368410 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gchv8\" (UniqueName: \"kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8\") pod \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.368866 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data\") pod \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\" (UID: \"516e4c98-314a-4116-b0fc-45c18fd1c7e1\") " Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.377072 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8" (OuterVolumeSpecName: "kube-api-access-gchv8") pod "516e4c98-314a-4116-b0fc-45c18fd1c7e1" (UID: "516e4c98-314a-4116-b0fc-45c18fd1c7e1"). InnerVolumeSpecName "kube-api-access-gchv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.406813 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "516e4c98-314a-4116-b0fc-45c18fd1c7e1" (UID: "516e4c98-314a-4116-b0fc-45c18fd1c7e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.422449 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data" (OuterVolumeSpecName: "config-data") pod "516e4c98-314a-4116-b0fc-45c18fd1c7e1" (UID: "516e4c98-314a-4116-b0fc-45c18fd1c7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.473010 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gchv8\" (UniqueName: \"kubernetes.io/projected/516e4c98-314a-4116-b0fc-45c18fd1c7e1-kube-api-access-gchv8\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.473214 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.473310 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e4c98-314a-4116-b0fc-45c18fd1c7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.744865 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c009cae-0016-4d35-9773-1e313feb5c4a" path="/var/lib/kubelet/pods/1c009cae-0016-4d35-9773-1e313feb5c4a/volumes" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.747156 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62eefe51-d633-47bd-b7b8-1b786cc8bdde" path="/var/lib/kubelet/pods/62eefe51-d633-47bd-b7b8-1b786cc8bdde/volumes" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.748238 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 
10:55:43.819626 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerStarted","Data":"2275bc3cbd2ca4cb0365fb35199c6ba0dd53592c51fe579e48d78b478ab3a151"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.821987 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.821986 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"516e4c98-314a-4116-b0fc-45c18fd1c7e1","Type":"ContainerDied","Data":"ad10cf81d533e8b37ef72f4ca4f01fc9f32600963e1feb8c42617710f205cfdd"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.822140 4845 scope.go:117] "RemoveContainer" containerID="86703aa66625a53c7ebcc31b1564a8020231b6947bd8876cc1af251a9048bf22" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.825183 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerStarted","Data":"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.825231 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerStarted","Data":"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.825247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerStarted","Data":"03916b40dcfb1d58e9a1856cf115e52af0968b63f0ef6c849ed5ef4cac5ddc0d"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.827714 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"039d1d72-0f72-4172-a037-ea289c8d7fbb","Type":"ContainerStarted","Data":"82ad75b278c8ed413545045dd321895af2d2e947bd739995535cc4a4f30a4b1d"} Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.866720 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.866667914 podStartE2EDuration="1.866667914s" podCreationTimestamp="2026-02-02 10:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:43.844053143 +0000 UTC m=+1424.935454593" watchObservedRunningTime="2026-02-02 10:55:43.866667914 +0000 UTC m=+1424.958069364" Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.938666 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:43 crc kubenswrapper[4845]: I0202 10:55:43.981964 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.003566 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:44 crc kubenswrapper[4845]: E0202 10:55:44.004254 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerName="nova-scheduler-scheduler" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.004279 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerName="nova-scheduler-scheduler" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.004679 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" containerName="nova-scheduler-scheduler" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.006072 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.011954 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.017630 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.105519 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.105976 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnk77\" (UniqueName: \"kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.106077 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.209249 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.209313 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vnk77\" (UniqueName: \"kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.209379 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.213910 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.217483 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.233914 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnk77\" (UniqueName: \"kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77\") pod \"nova-scheduler-0\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.331498 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.839157 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"039d1d72-0f72-4172-a037-ea289c8d7fbb","Type":"ContainerStarted","Data":"6a3f53c8d4d64b0fbda22be0ce7ca0c2081aa58ea492ba52ffebc44dcbc5dae9"} Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.841234 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.845724 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerStarted","Data":"2eb041a66024bd6fdf0a4047a860f057456d28471e9ac67391405774838038c1"} Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.872072 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.872028746 podStartE2EDuration="2.872028746s" podCreationTimestamp="2026-02-02 10:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:44.858751549 +0000 UTC m=+1425.950152999" watchObservedRunningTime="2026-02-02 10:55:44.872028746 +0000 UTC m=+1425.963430196" Feb 02 10:55:44 crc kubenswrapper[4845]: I0202 10:55:44.936032 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.731043 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516e4c98-314a-4116-b0fc-45c18fd1c7e1" path="/var/lib/kubelet/pods/516e4c98-314a-4116-b0fc-45c18fd1c7e1/volumes" Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.861927 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerStarted","Data":"4e6899cd823b95ed877f86a8c1dd20bcde7abe8ab4e5dac19c5ab5f2085fe814"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.861983 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerStarted","Data":"e82a8977aa57e2cfa579b4996bfd47a417b1401100d516e3057e129ade68d451"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.865121 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"652d8576-912d-4384-b487-aa6b987b567f","Type":"ContainerStarted","Data":"444f6ac6315562315a504c0600f21ab36cd45bf454f4002a51c1be86e2c6f5bb"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.865186 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"652d8576-912d-4384-b487-aa6b987b567f","Type":"ContainerStarted","Data":"9d162ecc4fb3460600a969e51ca75c36688e2c03aaec73046e664f22f56be6a6"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.870212 4845 generic.go:334] "Generic (PLEG): container finished" podID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerID="7371036457c908a60d98a53f34d54f0d70618efbebcf97fbc3e3f1b041ae7110" exitCode=0 Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.871556 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerDied","Data":"7371036457c908a60d98a53f34d54f0d70618efbebcf97fbc3e3f1b041ae7110"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.871827 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1498a0e1-1035-4eba-bbc5-169cd1de86a0","Type":"ContainerDied","Data":"f57f9e17a253f3027639c6870894363d5912e86c0ad65390204a822d4d8ff33a"} Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.872142 4845 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="f57f9e17a253f3027639c6870894363d5912e86c0ad65390204a822d4d8ff33a" Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.881541 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.888917 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.888874904 podStartE2EDuration="2.888874904s" podCreationTimestamp="2026-02-02 10:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:45.884251219 +0000 UTC m=+1426.975652669" watchObservedRunningTime="2026-02-02 10:55:45.888874904 +0000 UTC m=+1426.980276354" Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.952154 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs\") pod \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.952289 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data\") pod \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.952308 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle\") pod \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.952413 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgljn\" (UniqueName: \"kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn\") pod \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\" (UID: \"1498a0e1-1035-4eba-bbc5-169cd1de86a0\") " Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.953181 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs" (OuterVolumeSpecName: "logs") pod "1498a0e1-1035-4eba-bbc5-169cd1de86a0" (UID: "1498a0e1-1035-4eba-bbc5-169cd1de86a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:45 crc kubenswrapper[4845]: I0202 10:55:45.976914 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn" (OuterVolumeSpecName: "kube-api-access-zgljn") pod "1498a0e1-1035-4eba-bbc5-169cd1de86a0" (UID: "1498a0e1-1035-4eba-bbc5-169cd1de86a0"). InnerVolumeSpecName "kube-api-access-zgljn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:45.999777 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1498a0e1-1035-4eba-bbc5-169cd1de86a0" (UID: "1498a0e1-1035-4eba-bbc5-169cd1de86a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.006066 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data" (OuterVolumeSpecName: "config-data") pod "1498a0e1-1035-4eba-bbc5-169cd1de86a0" (UID: "1498a0e1-1035-4eba-bbc5-169cd1de86a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.055579 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgljn\" (UniqueName: \"kubernetes.io/projected/1498a0e1-1035-4eba-bbc5-169cd1de86a0-kube-api-access-zgljn\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.055621 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1498a0e1-1035-4eba-bbc5-169cd1de86a0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.055635 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.055647 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1498a0e1-1035-4eba-bbc5-169cd1de86a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.882008 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.925135 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.938735 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.962677 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:46 crc kubenswrapper[4845]: E0202 10:55:46.963362 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-api" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.963384 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-api" Feb 02 10:55:46 crc kubenswrapper[4845]: E0202 10:55:46.963399 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-log" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.963407 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-log" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.963650 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-log" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.963709 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" containerName="nova-api-api" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.965128 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.968640 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:55:46 crc kubenswrapper[4845]: I0202 10:55:46.988948 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.083380 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.083732 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98z8\" (UniqueName: \"kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.083933 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.084002 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.125476 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.185990 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.186108 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.186247 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.186312 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98z8\" (UniqueName: \"kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.191927 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.192051 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.215525 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98z8\" (UniqueName: \"kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.235173 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data\") pod \"nova-api-0\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") " pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.297192 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.509829 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.511796 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.727580 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1498a0e1-1035-4eba-bbc5-169cd1de86a0" path="/var/lib/kubelet/pods/1498a0e1-1035-4eba-bbc5-169cd1de86a0/volumes" Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.799764 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.896443 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerStarted","Data":"07a8bb175c6ab486cf8171570281ab764339f6cc80042312c0c66b5beebb4313"} Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.902241 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerStarted","Data":"6656f047e77f8e3107d54021260879721254382d799001121bd164ebae15d9c9"} Feb 02 10:55:47 crc kubenswrapper[4845]: I0202 10:55:47.902308 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:55:48 crc kubenswrapper[4845]: I0202 10:55:48.915408 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerStarted","Data":"0bac2a8d77845121bf798ed7e7bbbcdf0da36dc5492619b8084fe476f2575739"} Feb 02 10:55:48 crc kubenswrapper[4845]: I0202 10:55:48.916041 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerStarted","Data":"b18cfab9ee8f5f4276f99dca37c12a86dbf672950e6b072ff69b9e4ffc29f67d"} Feb 02 10:55:48 crc kubenswrapper[4845]: I0202 10:55:48.941853 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.950177499 podStartE2EDuration="6.941829568s" podCreationTimestamp="2026-02-02 10:55:42 +0000 UTC" firstStartedPulling="2026-02-02 10:55:43.2525119 +0000 UTC m=+1424.343913360" lastFinishedPulling="2026-02-02 10:55:47.244163979 +0000 UTC m=+1428.335565429" observedRunningTime="2026-02-02 10:55:47.926396931 +0000 UTC m=+1429.017798391" watchObservedRunningTime="2026-02-02 10:55:48.941829568 +0000 UTC m=+1430.033231018" Feb 02 10:55:48 crc kubenswrapper[4845]: I0202 10:55:48.945305 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.945287639 podStartE2EDuration="2.945287639s" podCreationTimestamp="2026-02-02 10:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:48.932737403 +0000 UTC m=+1430.024138853" watchObservedRunningTime="2026-02-02 10:55:48.945287639 +0000 UTC m=+1430.036689099" Feb 02 10:55:49 crc kubenswrapper[4845]: I0202 10:55:49.332328 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 10:55:52 crc kubenswrapper[4845]: I0202 10:55:52.507923 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 10:55:52 crc kubenswrapper[4845]: I0202 10:55:52.510081 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 10:55:53 crc kubenswrapper[4845]: I0202 10:55:53.224370 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 10:55:53 crc kubenswrapper[4845]: I0202 10:55:53.520071 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:55:53 crc kubenswrapper[4845]: I0202 10:55:53.520082 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 10:55:54 crc kubenswrapper[4845]: I0202 10:55:54.332664 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 
02 10:55:54 crc kubenswrapper[4845]: I0202 10:55:54.366276 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 10:55:55 crc kubenswrapper[4845]: I0202 10:55:55.026541 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 10:55:57 crc kubenswrapper[4845]: I0202 10:55:57.298052 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:55:57 crc kubenswrapper[4845]: I0202 10:55:57.298569 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:55:58 crc kubenswrapper[4845]: I0202 10:55:58.381278 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:55:58 crc kubenswrapper[4845]: I0202 10:55:58.381295 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:02 crc kubenswrapper[4845]: I0202 10:56:02.513365 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 10:56:02 crc kubenswrapper[4845]: I0202 10:56:02.514181 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 10:56:02 crc kubenswrapper[4845]: I0202 10:56:02.521557 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 10:56:02 crc kubenswrapper[4845]: I0202 10:56:02.521751 4845 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 10:56:04 crc kubenswrapper[4845]: I0202 10:56:04.911011 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.062037 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gzv\" (UniqueName: \"kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv\") pod \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.062381 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle\") pod \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.062792 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data\") pod \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\" (UID: \"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98\") " Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.071585 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv" (OuterVolumeSpecName: "kube-api-access-p7gzv") pod "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" (UID: "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98"). InnerVolumeSpecName "kube-api-access-p7gzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.119098 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data" (OuterVolumeSpecName: "config-data") pod "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" (UID: "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.166749 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gzv\" (UniqueName: \"kubernetes.io/projected/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-kube-api-access-p7gzv\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.166795 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.167383 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" (UID: "6b1dccba-b1ef-4b7c-aa80-ec15529e7a98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.203131 4845 generic.go:334] "Generic (PLEG): container finished" podID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" containerID="f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad" exitCode=137 Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.203191 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98","Type":"ContainerDied","Data":"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad"} Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.203226 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b1dccba-b1ef-4b7c-aa80-ec15529e7a98","Type":"ContainerDied","Data":"fe04f70476c707b2543738532ef74000620c8c2924f735fde10bff5d95053cb3"} Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.203249 4845 scope.go:117] "RemoveContainer" containerID="f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.203451 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.268948 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.269169 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.276125 4845 scope.go:117] "RemoveContainer" containerID="f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad" Feb 02 10:56:05 crc kubenswrapper[4845]: E0202 10:56:05.278296 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad\": container with ID starting with f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad not found: ID does not exist" containerID="f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.278331 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad"} err="failed to get container status \"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad\": rpc error: code = NotFound desc = could not find container \"f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad\": container with ID starting with f2b8976e3e955a3582976396490af97d79417dc4788630c356c0e93e5f11bcad not found: ID does not exist" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.289476 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.318974 4845 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:56:05 crc kubenswrapper[4845]: E0202 10:56:05.319666 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.319696 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.320089 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.321216 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.330293 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.330358 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.330311 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.333169 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.474932 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72h9m\" (UniqueName: \"kubernetes.io/projected/85bf6fdc-0816-4f80-966c-426f4906c581-kube-api-access-72h9m\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" 
Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.475121 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.475154 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.475287 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.475377 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.579933 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 
crc kubenswrapper[4845]: I0202 10:56:05.580567 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.580710 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.580835 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.581000 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72h9m\" (UniqueName: \"kubernetes.io/projected/85bf6fdc-0816-4f80-966c-426f4906c581-kube-api-access-72h9m\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.586237 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.586756 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.588215 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.589815 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85bf6fdc-0816-4f80-966c-426f4906c581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.601584 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72h9m\" (UniqueName: \"kubernetes.io/projected/85bf6fdc-0816-4f80-966c-426f4906c581-kube-api-access-72h9m\") pod \"nova-cell1-novncproxy-0\" (UID: \"85bf6fdc-0816-4f80-966c-426f4906c581\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.651211 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:05 crc kubenswrapper[4845]: I0202 10:56:05.729695 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1dccba-b1ef-4b7c-aa80-ec15529e7a98" path="/var/lib/kubelet/pods/6b1dccba-b1ef-4b7c-aa80-ec15529e7a98/volumes" Feb 02 10:56:06 crc kubenswrapper[4845]: I0202 10:56:06.143241 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:56:06 crc kubenswrapper[4845]: I0202 10:56:06.219757 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85bf6fdc-0816-4f80-966c-426f4906c581","Type":"ContainerStarted","Data":"57f9670fd6dc1ac7c811dd9e262312cf03489eba76b3d88e6deb5fa9fa5ebd8a"} Feb 02 10:56:07 crc kubenswrapper[4845]: I0202 10:56:07.233453 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85bf6fdc-0816-4f80-966c-426f4906c581","Type":"ContainerStarted","Data":"01a13e096495cac9c42990a9d83ca89789cbc700b5b4b2fc0018bb90dc2ecccb"} Feb 02 10:56:07 crc kubenswrapper[4845]: I0202 10:56:07.260376 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.260349724 podStartE2EDuration="2.260349724s" podCreationTimestamp="2026-02-02 10:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:07.251333891 +0000 UTC m=+1448.342735341" watchObservedRunningTime="2026-02-02 10:56:07.260349724 +0000 UTC m=+1448.351751174" Feb 02 10:56:07 crc kubenswrapper[4845]: I0202 10:56:07.302138 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 10:56:07 crc kubenswrapper[4845]: I0202 10:56:07.302681 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 10:56:07 
crc kubenswrapper[4845]: I0202 10:56:07.302790 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 10:56:07 crc kubenswrapper[4845]: I0202 10:56:07.304695 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.257823 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.271818 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.441916 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-785dh"] Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.444299 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.465249 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-785dh"] Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.564833 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bsf\" (UniqueName: \"kubernetes.io/projected/330a4322-2c1c-4f9a-9093-bfae422cc1fb-kube-api-access-s4bsf\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.564906 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc 
kubenswrapper[4845]: I0202 10:56:08.564953 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.565024 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.565066 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.565089 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667256 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 
02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667306 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667417 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bsf\" (UniqueName: \"kubernetes.io/projected/330a4322-2c1c-4f9a-9093-bfae422cc1fb-kube-api-access-s4bsf\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667437 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667486 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.667556 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.668311 
4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.668329 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.668417 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.668699 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.668876 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330a4322-2c1c-4f9a-9093-bfae422cc1fb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.712154 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bsf\" (UniqueName: 
\"kubernetes.io/projected/330a4322-2c1c-4f9a-9093-bfae422cc1fb-kube-api-access-s4bsf\") pod \"dnsmasq-dns-6b7bbf7cf9-785dh\" (UID: \"330a4322-2c1c-4f9a-9093-bfae422cc1fb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:08 crc kubenswrapper[4845]: I0202 10:56:08.790582 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:09 crc kubenswrapper[4845]: I0202 10:56:09.421384 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-785dh"] Feb 02 10:56:10 crc kubenswrapper[4845]: I0202 10:56:10.294039 4845 generic.go:334] "Generic (PLEG): container finished" podID="330a4322-2c1c-4f9a-9093-bfae422cc1fb" containerID="a24ae3b07619062acab47128eaac209a8d18956597107a5ffa53abad39c595ee" exitCode=0 Feb 02 10:56:10 crc kubenswrapper[4845]: I0202 10:56:10.294100 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" event={"ID":"330a4322-2c1c-4f9a-9093-bfae422cc1fb","Type":"ContainerDied","Data":"a24ae3b07619062acab47128eaac209a8d18956597107a5ffa53abad39c595ee"} Feb 02 10:56:10 crc kubenswrapper[4845]: I0202 10:56:10.294505 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" event={"ID":"330a4322-2c1c-4f9a-9093-bfae422cc1fb","Type":"ContainerStarted","Data":"60f36e3b096718034ca98b3ef58919759a9c1e7b337d094bb4ca2e6a5d3d5de0"} Feb 02 10:56:10 crc kubenswrapper[4845]: I0202 10:56:10.653288 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.063130 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.064012 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" 
containerName="ceilometer-central-agent" containerID="cri-o://2eb041a66024bd6fdf0a4047a860f057456d28471e9ac67391405774838038c1" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.064156 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd" containerID="cri-o://6656f047e77f8e3107d54021260879721254382d799001121bd164ebae15d9c9" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.064221 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="sg-core" containerID="cri-o://4e6899cd823b95ed877f86a8c1dd20bcde7abe8ab4e5dac19c5ab5f2085fe814" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.064273 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-notification-agent" containerID="cri-o://e82a8977aa57e2cfa579b4996bfd47a417b1401100d516e3057e129ade68d451" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.090017 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.260529 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.328011 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" event={"ID":"330a4322-2c1c-4f9a-9093-bfae422cc1fb","Type":"ContainerStarted","Data":"f80ccc3b5c32b7ad0f9e13715694bc96f5f8cedbeb2ed5bd60dac4af17084777"} Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 
10:56:11.328094 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340032 4845 generic.go:334] "Generic (PLEG): container finished" podID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerID="6656f047e77f8e3107d54021260879721254382d799001121bd164ebae15d9c9" exitCode=0 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340070 4845 generic.go:334] "Generic (PLEG): container finished" podID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerID="4e6899cd823b95ed877f86a8c1dd20bcde7abe8ab4e5dac19c5ab5f2085fe814" exitCode=2 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340130 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerDied","Data":"6656f047e77f8e3107d54021260879721254382d799001121bd164ebae15d9c9"} Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340189 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerDied","Data":"4e6899cd823b95ed877f86a8c1dd20bcde7abe8ab4e5dac19c5ab5f2085fe814"} Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340275 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-log" containerID="cri-o://b18cfab9ee8f5f4276f99dca37c12a86dbf672950e6b072ff69b9e4ffc29f67d" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.340392 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-api" containerID="cri-o://0bac2a8d77845121bf798ed7e7bbbcdf0da36dc5492619b8084fe476f2575739" gracePeriod=30 Feb 02 10:56:11 crc kubenswrapper[4845]: I0202 10:56:11.395667 4845 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" podStartSLOduration=3.395647236 podStartE2EDuration="3.395647236s" podCreationTimestamp="2026-02-02 10:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:11.373109899 +0000 UTC m=+1452.464511349" watchObservedRunningTime="2026-02-02 10:56:11.395647236 +0000 UTC m=+1452.487048686"
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.363822 4845 generic.go:334] "Generic (PLEG): container finished" podID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerID="e82a8977aa57e2cfa579b4996bfd47a417b1401100d516e3057e129ade68d451" exitCode=0
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.363856 4845 generic.go:334] "Generic (PLEG): container finished" podID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerID="2eb041a66024bd6fdf0a4047a860f057456d28471e9ac67391405774838038c1" exitCode=0
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.363931 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerDied","Data":"e82a8977aa57e2cfa579b4996bfd47a417b1401100d516e3057e129ade68d451"}
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.363963 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerDied","Data":"2eb041a66024bd6fdf0a4047a860f057456d28471e9ac67391405774838038c1"}
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.371011 4845 generic.go:334] "Generic (PLEG): container finished" podID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerID="b18cfab9ee8f5f4276f99dca37c12a86dbf672950e6b072ff69b9e4ffc29f67d" exitCode=143
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.372441 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerDied","Data":"b18cfab9ee8f5f4276f99dca37c12a86dbf672950e6b072ff69b9e4ffc29f67d"}
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.620584 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.661917 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.661981 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44xlk\" (UniqueName: \"kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662021 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662114 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662464 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662493 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662555 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.662589 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data\") pod \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\" (UID: \"c8162bea-daa1-42e3-8921-3c12ad56dfa6\") "
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.663327 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.663675 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.663734 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.670052 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk" (OuterVolumeSpecName: "kube-api-access-44xlk") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "kube-api-access-44xlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.703836 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts" (OuterVolumeSpecName: "scripts") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.773817 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8162bea-daa1-42e3-8921-3c12ad56dfa6-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.773858 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.773867 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44xlk\" (UniqueName: \"kubernetes.io/projected/c8162bea-daa1-42e3-8921-3c12ad56dfa6-kube-api-access-44xlk\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.792164 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.840055 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.876711 4845 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.876991 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.877995 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:12 crc kubenswrapper[4845]: I0202 10:56:12.980314 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.013700 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data" (OuterVolumeSpecName: "config-data") pod "c8162bea-daa1-42e3-8921-3c12ad56dfa6" (UID: "c8162bea-daa1-42e3-8921-3c12ad56dfa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.084645 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8162bea-daa1-42e3-8921-3c12ad56dfa6-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.387604 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8162bea-daa1-42e3-8921-3c12ad56dfa6","Type":"ContainerDied","Data":"2275bc3cbd2ca4cb0365fb35199c6ba0dd53592c51fe579e48d78b478ab3a151"}
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.387657 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.387672 4845 scope.go:117] "RemoveContainer" containerID="6656f047e77f8e3107d54021260879721254382d799001121bd164ebae15d9c9"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.419685 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.425736 4845 scope.go:117] "RemoveContainer" containerID="4e6899cd823b95ed877f86a8c1dd20bcde7abe8ab4e5dac19c5ab5f2085fe814"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.431385 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.456969 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:13 crc kubenswrapper[4845]: E0202 10:56:13.458459 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.458490 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd"
Feb 02 10:56:13 crc kubenswrapper[4845]: E0202 10:56:13.458511 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="sg-core"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.458520 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="sg-core"
Feb 02 10:56:13 crc kubenswrapper[4845]: E0202 10:56:13.458594 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-central-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.458608 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-central-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: E0202 10:56:13.458623 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-notification-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.458637 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-notification-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.462416 4845 scope.go:117] "RemoveContainer" containerID="e82a8977aa57e2cfa579b4996bfd47a417b1401100d516e3057e129ade68d451"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.463345 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.463415 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="sg-core"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.463454 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-central-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.463473 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="ceilometer-notification-agent"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.477808 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.481769 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.482011 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.481905 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.513413 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.565934 4845 scope.go:117] "RemoveContainer" containerID="2eb041a66024bd6fdf0a4047a860f057456d28471e9ac67391405774838038c1"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621347 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkdz\" (UniqueName: \"kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621422 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621465 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621489 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621505 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621544 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621577 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.621595 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723219 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkdz\" (UniqueName: \"kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723309 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723367 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723404 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723424 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723478 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723527 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.723556 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.724143 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.724404 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.728081 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" path="/var/lib/kubelet/pods/c8162bea-daa1-42e3-8921-3c12ad56dfa6/volumes"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.740709 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.741759 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.741990 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.742115 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.742412 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.747127 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkdz\" (UniqueName: \"kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz\") pod \"ceilometer-0\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.864715 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:56:13 crc kubenswrapper[4845]: I0202 10:56:13.888076 4845 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod516e4c98-314a-4116-b0fc-45c18fd1c7e1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod516e4c98-314a-4116-b0fc-45c18fd1c7e1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod516e4c98_314a_4116_b0fc_45c18fd1c7e1.slice"
Feb 02 10:56:14 crc kubenswrapper[4845]: I0202 10:56:14.205933 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:14 crc kubenswrapper[4845]: I0202 10:56:14.391533 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:56:14 crc kubenswrapper[4845]: W0202 10:56:14.393053 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41f8f678_af44_4ddc_b2db_01f96bae8601.slice/crio-568792204d8f6e91541e175f4c66f405089f0798e5551671c568bec6667a39ac WatchSource:0}: Error finding container 568792204d8f6e91541e175f4c66f405089f0798e5551671c568bec6667a39ac: Status 404 returned error can't find the container with id 568792204d8f6e91541e175f4c66f405089f0798e5551671c568bec6667a39ac
Feb 02 10:56:15 crc kubenswrapper[4845]: I0202 10:56:15.413234 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerStarted","Data":"27f273dfdbc6f48a241cefa433f71908752cdbef7c94c8988ca4fbc3330d687c"}
Feb 02 10:56:15 crc kubenswrapper[4845]: I0202 10:56:15.413505 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerStarted","Data":"568792204d8f6e91541e175f4c66f405089f0798e5551671c568bec6667a39ac"}
Feb 02 10:56:15 crc kubenswrapper[4845]: I0202 10:56:15.652132 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 10:56:15 crc kubenswrapper[4845]: I0202 10:56:15.675538 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.237256 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.237629 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.432687 4845 generic.go:334] "Generic (PLEG): container finished" podID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerID="0bac2a8d77845121bf798ed7e7bbbcdf0da36dc5492619b8084fe476f2575739" exitCode=0
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.432799 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerDied","Data":"0bac2a8d77845121bf798ed7e7bbbcdf0da36dc5492619b8084fe476f2575739"}
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.440141 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerStarted","Data":"2260ccaf2924a7de756705c5ed9aff5c1bd71d1d2fc36f15c40d4d5dc70e271f"}
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.468664 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.570782 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.699987 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data\") pod \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") "
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.700072 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle\") pod \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") "
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.700292 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m98z8\" (UniqueName: \"kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8\") pod \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") "
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.700473 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs\") pod \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\" (UID: \"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55\") "
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.701114 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs" (OuterVolumeSpecName: "logs") pod "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" (UID: "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.701623 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-logs\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.706911 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8" (OuterVolumeSpecName: "kube-api-access-m98z8") pod "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" (UID: "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55"). InnerVolumeSpecName "kube-api-access-m98z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.742813 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5zwdv"]
Feb 02 10:56:16 crc kubenswrapper[4845]: E0202 10:56:16.743366 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-log"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.743383 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-log"
Feb 02 10:56:16 crc kubenswrapper[4845]: E0202 10:56:16.743419 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-api"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.743426 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-api"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.743639 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-log"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.743672 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" containerName="nova-api-api"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.744483 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.775342 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.775549 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.797022 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" (UID: "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.807780 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5zwdv"]
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.813609 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data" (OuterVolumeSpecName: "config-data") pod "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" (UID: "7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.813699 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814034 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814129 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814360 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8k2h\" (UniqueName: \"kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814481 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814494 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.814505 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m98z8\" (UniqueName: \"kubernetes.io/projected/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55-kube-api-access-m98z8\") on node \"crc\" DevicePath \"\""
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.919576 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.919878 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8k2h\" (UniqueName: \"kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.919956 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.920257 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.924007 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.946387 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8k2h\" (UniqueName: \"kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.947348 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:16 crc kubenswrapper[4845]: I0202 10:56:16.947733 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts\") pod \"nova-cell1-cell-mapping-5zwdv\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " pod="openstack/nova-cell1-cell-mapping-5zwdv"
Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.102037 4845 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5zwdv" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.455176 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerStarted","Data":"a59beb72591810706d3416b2fffeb070930f113386bdfdb4800fa23b079aa1da"} Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.457535 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.457591 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55","Type":"ContainerDied","Data":"07a8bb175c6ab486cf8171570281ab764339f6cc80042312c0c66b5beebb4313"} Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.457667 4845 scope.go:117] "RemoveContainer" containerID="0bac2a8d77845121bf798ed7e7bbbcdf0da36dc5492619b8084fe476f2575739" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.490032 4845 scope.go:117] "RemoveContainer" containerID="b18cfab9ee8f5f4276f99dca37c12a86dbf672950e6b072ff69b9e4ffc29f67d" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.548960 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.581957 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.618681 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.620548 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.630686 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.634977 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.636271 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.651944 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.734925 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55" path="/var/lib/kubelet/pods/7a2a4f87-44b1-4d9b-b4a9-687bbf6d2e55/volumes" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.735708 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5zwdv"] Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749432 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749633 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749717 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749788 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749870 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.749948 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfh5\" (UniqueName: \"kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.852933 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.853085 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs\") pod \"nova-api-0\" (UID: 
\"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.853972 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.854040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfh5\" (UniqueName: \"kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.854162 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.854353 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.854486 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.859905 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.860235 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.861644 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.861944 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.877282 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfh5\" (UniqueName: \"kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5\") pod \"nova-api-0\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " pod="openstack/nova-api-0" Feb 02 10:56:17 crc kubenswrapper[4845]: I0202 10:56:17.946359 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.472203 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5zwdv" event={"ID":"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b","Type":"ContainerStarted","Data":"8584a98006ccb553e74a988ef9d575c49271ebfc304ac836fbd4745ff8e13b8d"} Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.472566 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5zwdv" event={"ID":"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b","Type":"ContainerStarted","Data":"29f5f8cab78cd261d0abd37ffd5974b0926bb3d10de649b68d9d21e2b1208d82"} Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.512444 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5zwdv" podStartSLOduration=2.512420627 podStartE2EDuration="2.512420627s" podCreationTimestamp="2026-02-02 10:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:18.502707604 +0000 UTC m=+1459.594109054" watchObservedRunningTime="2026-02-02 10:56:18.512420627 +0000 UTC m=+1459.603822067" Feb 02 10:56:18 crc kubenswrapper[4845]: W0202 10:56:18.549350 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722f6d8a_3c97_4061_b26a_f8ec00f65006.slice/crio-33b69e32755cf4bbe30b319fc4768886cb90925b277cb1bb998366cf1a27877f WatchSource:0}: Error finding container 33b69e32755cf4bbe30b319fc4768886cb90925b277cb1bb998366cf1a27877f: Status 404 returned error can't find the container with id 33b69e32755cf4bbe30b319fc4768886cb90925b277cb1bb998366cf1a27877f Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.560387 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 
10:56:18.793053 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-785dh" Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.871391 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:56:18 crc kubenswrapper[4845]: I0202 10:56:18.871699 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="dnsmasq-dns" containerID="cri-o://4588a5acb74ef82abd161d26d28f375cb1f56efac97a86455f032c5662e7188e" gracePeriod=10 Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.542513 4845 generic.go:334] "Generic (PLEG): container finished" podID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerID="4588a5acb74ef82abd161d26d28f375cb1f56efac97a86455f032c5662e7188e" exitCode=0 Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.542843 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" event={"ID":"83cd6f6d-3615-46e0-875a-e1cec10e9631","Type":"ContainerDied","Data":"4588a5acb74ef82abd161d26d28f375cb1f56efac97a86455f032c5662e7188e"} Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.547131 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerStarted","Data":"a76cab8c49a716e1851e502e56d3f3205dc6fe1e003ead53af25ac18bdab30b7"} Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.547170 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerStarted","Data":"64b410e414340d61d34d86ceee1eabcc7870da5967de863190b7a1e715af98f3"} Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.547181 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerStarted","Data":"33b69e32755cf4bbe30b319fc4768886cb90925b277cb1bb998366cf1a27877f"} Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.609864 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.609838104 podStartE2EDuration="2.609838104s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:19.571256789 +0000 UTC m=+1460.662658259" watchObservedRunningTime="2026-02-02 10:56:19.609838104 +0000 UTC m=+1460.701239554" Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.627604 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751443 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751609 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751668 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751742 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751771 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.751820 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f27jh\" (UniqueName: \"kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh\") pod \"83cd6f6d-3615-46e0-875a-e1cec10e9631\" (UID: \"83cd6f6d-3615-46e0-875a-e1cec10e9631\") " Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.768577 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh" (OuterVolumeSpecName: "kube-api-access-f27jh") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "kube-api-access-f27jh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4845]: I0202 10:56:19.855380 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f27jh\" (UniqueName: \"kubernetes.io/projected/83cd6f6d-3615-46e0-875a-e1cec10e9631-kube-api-access-f27jh\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.018016 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.024352 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.036134 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.048434 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.048789 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config" (OuterVolumeSpecName: "config") pod "83cd6f6d-3615-46e0-875a-e1cec10e9631" (UID: "83cd6f6d-3615-46e0-875a-e1cec10e9631"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.060285 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.060318 4845 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.060329 4845 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.060339 4845 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 
crc kubenswrapper[4845]: I0202 10:56:20.060362 4845 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd6f6d-3615-46e0-875a-e1cec10e9631-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.558637 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" event={"ID":"83cd6f6d-3615-46e0-875a-e1cec10e9631","Type":"ContainerDied","Data":"5459dadbda58e0ce878baeea7244a3a46fe1a38ff8b8032f5afcf3e9f7c8bd0d"} Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.558670 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hvdzc" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.558704 4845 scope.go:117] "RemoveContainer" containerID="4588a5acb74ef82abd161d26d28f375cb1f56efac97a86455f032c5662e7188e" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.563354 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerStarted","Data":"5f868885fdf4c7ccdaf9be4e2929167b43f5c44340ad27b9a68723598a5a4d1d"} Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.563627 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="proxy-httpd" containerID="cri-o://5f868885fdf4c7ccdaf9be4e2929167b43f5c44340ad27b9a68723598a5a4d1d" gracePeriod=30 Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.563637 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="sg-core" containerID="cri-o://a59beb72591810706d3416b2fffeb070930f113386bdfdb4800fa23b079aa1da" gracePeriod=30 Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.563599 4845 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-central-agent" containerID="cri-o://27f273dfdbc6f48a241cefa433f71908752cdbef7c94c8988ca4fbc3330d687c" gracePeriod=30 Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.563744 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-notification-agent" containerID="cri-o://2260ccaf2924a7de756705c5ed9aff5c1bd71d1d2fc36f15c40d4d5dc70e271f" gracePeriod=30 Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.588099 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.575834855 podStartE2EDuration="7.588076886s" podCreationTimestamp="2026-02-02 10:56:13 +0000 UTC" firstStartedPulling="2026-02-02 10:56:14.396530081 +0000 UTC m=+1455.487931541" lastFinishedPulling="2026-02-02 10:56:19.408772122 +0000 UTC m=+1460.500173572" observedRunningTime="2026-02-02 10:56:20.587215191 +0000 UTC m=+1461.678616641" watchObservedRunningTime="2026-02-02 10:56:20.588076886 +0000 UTC m=+1461.679478336" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.602160 4845 scope.go:117] "RemoveContainer" containerID="f824758afe3c663578c7666b1f094db1caee520920ba737c52e548b07f3ee9ad" Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.623861 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:56:20 crc kubenswrapper[4845]: I0202 10:56:20.634302 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hvdzc"] Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584292 4845 generic.go:334] "Generic (PLEG): container finished" podID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerID="5f868885fdf4c7ccdaf9be4e2929167b43f5c44340ad27b9a68723598a5a4d1d" exitCode=0 Feb 02 
10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584610 4845 generic.go:334] "Generic (PLEG): container finished" podID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerID="a59beb72591810706d3416b2fffeb070930f113386bdfdb4800fa23b079aa1da" exitCode=2 Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584619 4845 generic.go:334] "Generic (PLEG): container finished" podID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerID="2260ccaf2924a7de756705c5ed9aff5c1bd71d1d2fc36f15c40d4d5dc70e271f" exitCode=0 Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584366 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerDied","Data":"5f868885fdf4c7ccdaf9be4e2929167b43f5c44340ad27b9a68723598a5a4d1d"} Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584667 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerDied","Data":"a59beb72591810706d3416b2fffeb070930f113386bdfdb4800fa23b079aa1da"} Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.584685 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerDied","Data":"2260ccaf2924a7de756705c5ed9aff5c1bd71d1d2fc36f15c40d4d5dc70e271f"} Feb 02 10:56:21 crc kubenswrapper[4845]: I0202 10:56:21.731439 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" path="/var/lib/kubelet/pods/83cd6f6d-3615-46e0-875a-e1cec10e9631/volumes" Feb 02 10:56:24 crc kubenswrapper[4845]: I0202 10:56:24.631934 4845 generic.go:334] "Generic (PLEG): container finished" podID="2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" containerID="8584a98006ccb553e74a988ef9d575c49271ebfc304ac836fbd4745ff8e13b8d" exitCode=0 Feb 02 10:56:24 crc kubenswrapper[4845]: I0202 10:56:24.632055 4845 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5zwdv" event={"ID":"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b","Type":"ContainerDied","Data":"8584a98006ccb553e74a988ef9d575c49271ebfc304ac836fbd4745ff8e13b8d"} Feb 02 10:56:25 crc kubenswrapper[4845]: I0202 10:56:25.649049 4845 generic.go:334] "Generic (PLEG): container finished" podID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerID="27f273dfdbc6f48a241cefa433f71908752cdbef7c94c8988ca4fbc3330d687c" exitCode=0 Feb 02 10:56:25 crc kubenswrapper[4845]: I0202 10:56:25.649127 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerDied","Data":"27f273dfdbc6f48a241cefa433f71908752cdbef7c94c8988ca4fbc3330d687c"} Feb 02 10:56:25 crc kubenswrapper[4845]: I0202 10:56:25.972222 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.135296 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.135693 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.135757 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: 
\"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.135838 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.135857 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.136022 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.136131 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrkdz\" (UniqueName: \"kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.136249 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.136362 4845 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts\") pod \"41f8f678-af44-4ddc-b2db-01f96bae8601\" (UID: \"41f8f678-af44-4ddc-b2db-01f96bae8601\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.136539 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.137261 4845 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.137289 4845 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41f8f678-af44-4ddc-b2db-01f96bae8601-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.143289 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz" (OuterVolumeSpecName: "kube-api-access-rrkdz") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "kube-api-access-rrkdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.144336 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts" (OuterVolumeSpecName: "scripts") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.146956 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5zwdv" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.207672 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.238835 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle\") pod \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.239115 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8k2h\" (UniqueName: \"kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h\") pod \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.239250 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts\") pod \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.239345 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data\") pod \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\" (UID: \"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b\") " Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.240309 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrkdz\" (UniqueName: \"kubernetes.io/projected/41f8f678-af44-4ddc-b2db-01f96bae8601-kube-api-access-rrkdz\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.240338 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.240353 4845 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.243505 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts" (OuterVolumeSpecName: "scripts") pod "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" (UID: "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.243531 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h" (OuterVolumeSpecName: "kube-api-access-j8k2h") pod "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" (UID: "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b"). InnerVolumeSpecName "kube-api-access-j8k2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.245519 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.274360 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data" (OuterVolumeSpecName: "config-data") pod "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" (UID: "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.274905 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" (UID: "2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.276257 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.311693 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data" (OuterVolumeSpecName: "config-data") pod "41f8f678-af44-4ddc-b2db-01f96bae8601" (UID: "41f8f678-af44-4ddc-b2db-01f96bae8601"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342469 4845 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342516 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342529 4845 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342542 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342552 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342560 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/41f8f678-af44-4ddc-b2db-01f96bae8601-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.342569 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8k2h\" (UniqueName: \"kubernetes.io/projected/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b-kube-api-access-j8k2h\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.676122 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41f8f678-af44-4ddc-b2db-01f96bae8601","Type":"ContainerDied","Data":"568792204d8f6e91541e175f4c66f405089f0798e5551671c568bec6667a39ac"} Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.676209 4845 scope.go:117] "RemoveContainer" containerID="5f868885fdf4c7ccdaf9be4e2929167b43f5c44340ad27b9a68723598a5a4d1d" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.677216 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.683379 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5zwdv" event={"ID":"2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b","Type":"ContainerDied","Data":"29f5f8cab78cd261d0abd37ffd5974b0926bb3d10de649b68d9d21e2b1208d82"} Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.683421 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f5f8cab78cd261d0abd37ffd5974b0926bb3d10de649b68d9d21e2b1208d82" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.683487 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5zwdv" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.723315 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.724504 4845 scope.go:117] "RemoveContainer" containerID="a59beb72591810706d3416b2fffeb070930f113386bdfdb4800fa23b079aa1da" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.744295 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.757638 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759057 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-notification-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759195 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-notification-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759275 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="sg-core" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759325 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="sg-core" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759417 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="dnsmasq-dns" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759466 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="dnsmasq-dns" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759517 4845 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" containerName="nova-manage" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759572 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" containerName="nova-manage" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759710 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="proxy-httpd" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759792 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="proxy-httpd" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.759903 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-central-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.759986 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-central-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: E0202 10:56:26.760066 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="init" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760123 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="init" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760500 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-notification-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760567 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="sg-core" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760638 4845 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="ceilometer-central-agent" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760707 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cd6f6d-3615-46e0-875a-e1cec10e9631" containerName="dnsmasq-dns" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760768 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" containerName="proxy-httpd" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.760834 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" containerName="nova-manage" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.763475 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.768412 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.768554 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.770608 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.776096 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.791076 4845 scope.go:117] "RemoveContainer" containerID="2260ccaf2924a7de756705c5ed9aff5c1bd71d1d2fc36f15c40d4d5dc70e271f" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.819749 4845 scope.go:117] "RemoveContainer" containerID="27f273dfdbc6f48a241cefa433f71908752cdbef7c94c8988ca4fbc3330d687c" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.854719 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-scripts\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.854787 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860035 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-config-data\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860130 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-kube-api-access-frkbj\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860158 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860256 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860328 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.860447 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.938680 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.939171 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="652d8576-912d-4384-b487-aa6b987b567f" containerName="nova-scheduler-scheduler" containerID="cri-o://444f6ac6315562315a504c0600f21ab36cd45bf454f4002a51c1be86e2c6f5bb" gracePeriod=30 Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963153 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-scripts\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963210 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963288 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-config-data\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963318 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-kube-api-access-frkbj\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963338 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963361 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963389 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc 
kubenswrapper[4845]: I0202 10:56:26.963431 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963755 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.963849 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.964143 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.964305 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-log" containerID="cri-o://64b410e414340d61d34d86ceee1eabcc7870da5967de863190b7a1e715af98f3" gracePeriod=30 Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.964546 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-api" containerID="cri-o://a76cab8c49a716e1851e502e56d3f3205dc6fe1e003ead53af25ac18bdab30b7" gracePeriod=30 Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.968371 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.969463 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.969564 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.973627 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-config-data\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.982636 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-scripts\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.985898 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.986170 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" 
containerName="nova-metadata-log" containerID="cri-o://d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79" gracePeriod=30 Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.986317 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" containerID="cri-o://1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d" gracePeriod=30 Feb 02 10:56:26 crc kubenswrapper[4845]: I0202 10:56:26.999699 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/813ec32b-5cd3-491d-85ac-bcf0140d0a8f-kube-api-access-frkbj\") pod \"ceilometer-0\" (UID: \"813ec32b-5cd3-491d-85ac-bcf0140d0a8f\") " pod="openstack/ceilometer-0" Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.083941 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.671734 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:56:27 crc kubenswrapper[4845]: W0202 10:56:27.676044 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod813ec32b_5cd3_491d_85ac_bcf0140d0a8f.slice/crio-49d1e4a21ffc7b3a8cdc928016c2ee2786bd0f5de81b8f9bf0cd6f9da50cc727 WatchSource:0}: Error finding container 49d1e4a21ffc7b3a8cdc928016c2ee2786bd0f5de81b8f9bf0cd6f9da50cc727: Status 404 returned error can't find the container with id 49d1e4a21ffc7b3a8cdc928016c2ee2786bd0f5de81b8f9bf0cd6f9da50cc727 Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.741560 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f8f678-af44-4ddc-b2db-01f96bae8601" path="/var/lib/kubelet/pods/41f8f678-af44-4ddc-b2db-01f96bae8601/volumes" Feb 02 10:56:27 crc 
kubenswrapper[4845]: I0202 10:56:27.742519 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"813ec32b-5cd3-491d-85ac-bcf0140d0a8f","Type":"ContainerStarted","Data":"49d1e4a21ffc7b3a8cdc928016c2ee2786bd0f5de81b8f9bf0cd6f9da50cc727"} Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.756954 4845 generic.go:334] "Generic (PLEG): container finished" podID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerID="d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79" exitCode=143 Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.757063 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerDied","Data":"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79"} Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.780457 4845 generic.go:334] "Generic (PLEG): container finished" podID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerID="a76cab8c49a716e1851e502e56d3f3205dc6fe1e003ead53af25ac18bdab30b7" exitCode=0 Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.780499 4845 generic.go:334] "Generic (PLEG): container finished" podID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerID="64b410e414340d61d34d86ceee1eabcc7870da5967de863190b7a1e715af98f3" exitCode=143 Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.780523 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerDied","Data":"a76cab8c49a716e1851e502e56d3f3205dc6fe1e003ead53af25ac18bdab30b7"} Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.780589 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerDied","Data":"64b410e414340d61d34d86ceee1eabcc7870da5967de863190b7a1e715af98f3"} Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 
10:56:27.844586 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.997935 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wfh5\" (UniqueName: \"kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.997992 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.998108 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.998142 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.998350 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.998417 4845 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs\") pod \"722f6d8a-3c97-4061-b26a-f8ec00f65006\" (UID: \"722f6d8a-3c97-4061-b26a-f8ec00f65006\") " Feb 02 10:56:27 crc kubenswrapper[4845]: I0202 10:56:27.999492 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs" (OuterVolumeSpecName: "logs") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.042148 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5" (OuterVolumeSpecName: "kube-api-access-8wfh5") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "kube-api-access-8wfh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.053943 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data" (OuterVolumeSpecName: "config-data") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.074032 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.101327 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wfh5\" (UniqueName: \"kubernetes.io/projected/722f6d8a-3c97-4061-b26a-f8ec00f65006-kube-api-access-8wfh5\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.101352 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.101364 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.101372 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722f6d8a-3c97-4061-b26a-f8ec00f65006-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.101439 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.150799 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "722f6d8a-3c97-4061-b26a-f8ec00f65006" (UID: "722f6d8a-3c97-4061-b26a-f8ec00f65006"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.203069 4845 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.203094 4845 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722f6d8a-3c97-4061-b26a-f8ec00f65006-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.805688 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"813ec32b-5cd3-491d-85ac-bcf0140d0a8f","Type":"ContainerStarted","Data":"da85dccaa6e812287f989d1879096e61fb210356881e2cf93d8e838b865daec8"} Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.811669 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"722f6d8a-3c97-4061-b26a-f8ec00f65006","Type":"ContainerDied","Data":"33b69e32755cf4bbe30b319fc4768886cb90925b277cb1bb998366cf1a27877f"} Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.811740 4845 scope.go:117] "RemoveContainer" containerID="a76cab8c49a716e1851e502e56d3f3205dc6fe1e003ead53af25ac18bdab30b7" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.811916 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.866387 4845 generic.go:334] "Generic (PLEG): container finished" podID="652d8576-912d-4384-b487-aa6b987b567f" containerID="444f6ac6315562315a504c0600f21ab36cd45bf454f4002a51c1be86e2c6f5bb" exitCode=0 Feb 02 10:56:28 crc kubenswrapper[4845]: I0202 10:56:28.866600 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"652d8576-912d-4384-b487-aa6b987b567f","Type":"ContainerDied","Data":"444f6ac6315562315a504c0600f21ab36cd45bf454f4002a51c1be86e2c6f5bb"} Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.013739 4845 scope.go:117] "RemoveContainer" containerID="64b410e414340d61d34d86ceee1eabcc7870da5967de863190b7a1e715af98f3" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.044838 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.060115 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.075314 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: E0202 10:56:29.076244 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-log" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.076309 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-log" Feb 02 10:56:29 crc kubenswrapper[4845]: E0202 10:56:29.076390 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-api" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.076449 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" 
containerName="nova-api-api" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.076767 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-api" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.077123 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" containerName="nova-api-log" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.077081 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.077780 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="652d8576-912d-4384-b487-aa6b987b567f" containerName="nova-scheduler-scheduler" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.078724 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.085323 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.085607 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.085816 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.103207 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.248491 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle\") pod \"652d8576-912d-4384-b487-aa6b987b567f\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " Feb 02 
10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.248763 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnk77\" (UniqueName: \"kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77\") pod \"652d8576-912d-4384-b487-aa6b987b567f\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.248919 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data\") pod \"652d8576-912d-4384-b487-aa6b987b567f\" (UID: \"652d8576-912d-4384-b487-aa6b987b567f\") " Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249252 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249434 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-config-data\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249661 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249710 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-569rd\" (UniqueName: \"kubernetes.io/projected/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-kube-api-access-569rd\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249897 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-logs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.249941 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-public-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.258447 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77" (OuterVolumeSpecName: "kube-api-access-vnk77") pod "652d8576-912d-4384-b487-aa6b987b567f" (UID: "652d8576-912d-4384-b487-aa6b987b567f"). InnerVolumeSpecName "kube-api-access-vnk77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.286929 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data" (OuterVolumeSpecName: "config-data") pod "652d8576-912d-4384-b487-aa6b987b567f" (UID: "652d8576-912d-4384-b487-aa6b987b567f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.288437 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "652d8576-912d-4384-b487-aa6b987b567f" (UID: "652d8576-912d-4384-b487-aa6b987b567f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352314 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-config-data\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352385 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352410 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569rd\" (UniqueName: \"kubernetes.io/projected/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-kube-api-access-569rd\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352478 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-logs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352506 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-public-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352600 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnk77\" (UniqueName: \"kubernetes.io/projected/652d8576-912d-4384-b487-aa6b987b567f-kube-api-access-vnk77\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352622 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.352634 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d8576-912d-4384-b487-aa6b987b567f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.353648 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-logs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.356129 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.356579 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-public-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.357524 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.362708 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-config-data\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.371817 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569rd\" (UniqueName: \"kubernetes.io/projected/953beda6-58f2-45c2-b34e-0cb7db2d3bf6-kube-api-access-569rd\") pod \"nova-api-0\" (UID: \"953beda6-58f2-45c2-b34e-0cb7db2d3bf6\") " pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.410152 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.732366 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722f6d8a-3c97-4061-b26a-f8ec00f65006" path="/var/lib/kubelet/pods/722f6d8a-3c97-4061-b26a-f8ec00f65006/volumes" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.884007 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"652d8576-912d-4384-b487-aa6b987b567f","Type":"ContainerDied","Data":"9d162ecc4fb3460600a969e51ca75c36688e2c03aaec73046e664f22f56be6a6"} Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.884028 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.884089 4845 scope.go:117] "RemoveContainer" containerID="444f6ac6315562315a504c0600f21ab36cd45bf454f4002a51c1be86e2c6f5bb" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.887102 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"813ec32b-5cd3-491d-85ac-bcf0140d0a8f","Type":"ContainerStarted","Data":"bde40b59bdf4479ccfb44f68da737645aab1a9c63a750d7f0ef0530fe8b00b04"} Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.911440 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.924913 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.949150 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:29 crc kubenswrapper[4845]: E0202 10:56:29.949925 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652d8576-912d-4384-b487-aa6b987b567f" containerName="nova-scheduler-scheduler" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.950008 4845 
state_mem.go:107] "Deleted CPUSet assignment" podUID="652d8576-912d-4384-b487-aa6b987b567f" containerName="nova-scheduler-scheduler" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.951160 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.953488 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:56:29 crc kubenswrapper[4845]: I0202 10:56:29.981194 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.043511 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.071638 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.072156 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-config-data\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.072359 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsgp\" (UniqueName: \"kubernetes.io/projected/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-kube-api-access-7xsgp\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.169892 
4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": read tcp 10.217.0.2:58524->10.217.0.249:8775: read: connection reset by peer" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.169952 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": read tcp 10.217.0.2:58528->10.217.0.249:8775: read: connection reset by peer" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.175515 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.175778 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-config-data\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.175834 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsgp\" (UniqueName: \"kubernetes.io/projected/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-kube-api-access-7xsgp\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.181638 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-config-data\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.182399 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.200632 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsgp\" (UniqueName: \"kubernetes.io/projected/b3eed39b-ccd7-4c3d-bbd8-6872503e1c60-kube-api-access-7xsgp\") pod \"nova-scheduler-0\" (UID: \"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60\") " pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.289190 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.766471 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.794431 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle\") pod \"ab48fc91-e9f1-4362-8cb8-091846601a7e\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.847356 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab48fc91-e9f1-4362-8cb8-091846601a7e" (UID: "ab48fc91-e9f1-4362-8cb8-091846601a7e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.875997 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.899365 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs\") pod \"ab48fc91-e9f1-4362-8cb8-091846601a7e\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.899672 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlh7l\" (UniqueName: \"kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l\") pod \"ab48fc91-e9f1-4362-8cb8-091846601a7e\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.899729 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs\") pod \"ab48fc91-e9f1-4362-8cb8-091846601a7e\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.899790 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data\") pod \"ab48fc91-e9f1-4362-8cb8-091846601a7e\" (UID: \"ab48fc91-e9f1-4362-8cb8-091846601a7e\") " Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.900470 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.900676 
4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs" (OuterVolumeSpecName: "logs") pod "ab48fc91-e9f1-4362-8cb8-091846601a7e" (UID: "ab48fc91-e9f1-4362-8cb8-091846601a7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.904485 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l" (OuterVolumeSpecName: "kube-api-access-zlh7l") pod "ab48fc91-e9f1-4362-8cb8-091846601a7e" (UID: "ab48fc91-e9f1-4362-8cb8-091846601a7e"). InnerVolumeSpecName "kube-api-access-zlh7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.938349 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"813ec32b-5cd3-491d-85ac-bcf0140d0a8f","Type":"ContainerStarted","Data":"a4b5e74bdb454e472c786f4693fd9b9111aaef9ac14fd8f77691e8053e36adae"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.946429 4845 generic.go:334] "Generic (PLEG): container finished" podID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerID="1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d" exitCode=0 Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.946497 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerDied","Data":"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.946526 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab48fc91-e9f1-4362-8cb8-091846601a7e","Type":"ContainerDied","Data":"03916b40dcfb1d58e9a1856cf115e52af0968b63f0ef6c849ed5ef4cac5ddc0d"} Feb 02 10:56:30 crc 
kubenswrapper[4845]: I0202 10:56:30.946544 4845 scope.go:117] "RemoveContainer" containerID="1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.946671 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.956145 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"953beda6-58f2-45c2-b34e-0cb7db2d3bf6","Type":"ContainerStarted","Data":"fe20f1006932a35e64b4cabe966be37477818387fbbdcf4c4237daac6a8ab17d"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.956198 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"953beda6-58f2-45c2-b34e-0cb7db2d3bf6","Type":"ContainerStarted","Data":"e749a478c4e428d3146f39065a93a280166d6689eb0197b0e6f5762e7e29c2d0"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.956217 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"953beda6-58f2-45c2-b34e-0cb7db2d3bf6","Type":"ContainerStarted","Data":"91e099fe3253140f734d5bfb54b7af2f37d445390a9f68543fb5261b3d4e23bb"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.968012 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60","Type":"ContainerStarted","Data":"75337218b7cb847596bd50c9548b2fcbaf05e00356b9ac1ca423ad46c58a0121"} Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.995397 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data" (OuterVolumeSpecName: "config-data") pod "ab48fc91-e9f1-4362-8cb8-091846601a7e" (UID: "ab48fc91-e9f1-4362-8cb8-091846601a7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:30 crc kubenswrapper[4845]: I0202 10:56:30.995553 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.995534993 podStartE2EDuration="2.995534993s" podCreationTimestamp="2026-02-02 10:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:30.983995497 +0000 UTC m=+1472.075396947" watchObservedRunningTime="2026-02-02 10:56:30.995534993 +0000 UTC m=+1472.086936443" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.003367 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlh7l\" (UniqueName: \"kubernetes.io/projected/ab48fc91-e9f1-4362-8cb8-091846601a7e-kube-api-access-zlh7l\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.003414 4845 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab48fc91-e9f1-4362-8cb8-091846601a7e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.003427 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.008433 4845 scope.go:117] "RemoveContainer" containerID="d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.024191 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab48fc91-e9f1-4362-8cb8-091846601a7e" (UID: "ab48fc91-e9f1-4362-8cb8-091846601a7e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.044213 4845 scope.go:117] "RemoveContainer" containerID="1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d" Feb 02 10:56:31 crc kubenswrapper[4845]: E0202 10:56:31.044994 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d\": container with ID starting with 1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d not found: ID does not exist" containerID="1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.045051 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d"} err="failed to get container status \"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d\": rpc error: code = NotFound desc = could not find container \"1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d\": container with ID starting with 1179e5d38c7711858861d519e333bd2170cf5cc0718d6497caa7d2116458f55d not found: ID does not exist" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.045076 4845 scope.go:117] "RemoveContainer" containerID="d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79" Feb 02 10:56:31 crc kubenswrapper[4845]: E0202 10:56:31.045722 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79\": container with ID starting with d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79 not found: ID does not exist" containerID="d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.045826 
4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79"} err="failed to get container status \"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79\": rpc error: code = NotFound desc = could not find container \"d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79\": container with ID starting with d699d40e07287b19d500020a4c6395318b1aca063b567a81959a1c61c06a6c79 not found: ID does not exist" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.106554 4845 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab48fc91-e9f1-4362-8cb8-091846601a7e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.336860 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.383113 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.395260 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:31 crc kubenswrapper[4845]: E0202 10:56:31.395748 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-log" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.395790 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-log" Feb 02 10:56:31 crc kubenswrapper[4845]: E0202 10:56:31.395824 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.395831 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.396061 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-log" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.396089 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" containerName="nova-metadata-metadata" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.397558 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.404662 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.406346 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.408631 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.448167 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-config-data\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.448226 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-logs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.448306 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.449391 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.449716 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvqp\" (UniqueName: \"kubernetes.io/projected/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-kube-api-access-2cvqp\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.552075 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.552194 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvqp\" (UniqueName: \"kubernetes.io/projected/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-kube-api-access-2cvqp\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.552256 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-config-data\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.552280 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-logs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.552330 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.554322 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-logs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.558304 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.560280 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-config-data\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc 
kubenswrapper[4845]: I0202 10:56:31.561246 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.576556 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvqp\" (UniqueName: \"kubernetes.io/projected/12adbd4d-efe1-4549-bcac-f2b5f14f18b9-kube-api-access-2cvqp\") pod \"nova-metadata-0\" (UID: \"12adbd4d-efe1-4549-bcac-f2b5f14f18b9\") " pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.719276 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.728255 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652d8576-912d-4384-b487-aa6b987b567f" path="/var/lib/kubelet/pods/652d8576-912d-4384-b487-aa6b987b567f/volumes" Feb 02 10:56:31 crc kubenswrapper[4845]: I0202 10:56:31.728855 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab48fc91-e9f1-4362-8cb8-091846601a7e" path="/var/lib/kubelet/pods/ab48fc91-e9f1-4362-8cb8-091846601a7e/volumes" Feb 02 10:56:32 crc kubenswrapper[4845]: I0202 10:56:32.037447 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3eed39b-ccd7-4c3d-bbd8-6872503e1c60","Type":"ContainerStarted","Data":"396b0203d72671aa7e4b87c3146da1335234c9ff74aba0fb3c6f0763fb91e65e"} Feb 02 10:56:32 crc kubenswrapper[4845]: I0202 10:56:32.096570 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.096540205 podStartE2EDuration="3.096540205s" podCreationTimestamp="2026-02-02 10:56:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:32.084394071 +0000 UTC m=+1473.175795521" watchObservedRunningTime="2026-02-02 10:56:32.096540205 +0000 UTC m=+1473.187941655" Feb 02 10:56:32 crc kubenswrapper[4845]: W0202 10:56:32.483550 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12adbd4d_efe1_4549_bcac_f2b5f14f18b9.slice/crio-bd87547cfe087444a3db7a133aef38b2f4c42f0b9d61af93217c2519a0a041e8 WatchSource:0}: Error finding container bd87547cfe087444a3db7a133aef38b2f4c42f0b9d61af93217c2519a0a041e8: Status 404 returned error can't find the container with id bd87547cfe087444a3db7a133aef38b2f4c42f0b9d61af93217c2519a0a041e8 Feb 02 10:56:32 crc kubenswrapper[4845]: I0202 10:56:32.485594 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.073397 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"813ec32b-5cd3-491d-85ac-bcf0140d0a8f","Type":"ContainerStarted","Data":"cfcdefd905ead75f76a06a4e18b3cf15176142f0f95015876c35874ddd232f1a"} Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.074225 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.086150 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12adbd4d-efe1-4549-bcac-f2b5f14f18b9","Type":"ContainerStarted","Data":"65475c23e9519977be49da9ac78ac49a2f12e0028a8de94546f0e723dfe9239d"} Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.086223 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"12adbd4d-efe1-4549-bcac-f2b5f14f18b9","Type":"ContainerStarted","Data":"06489d103c7aad5e83dad1f6f603781de226b64d0f14299b12a2fcc051687cc2"} Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.086238 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12adbd4d-efe1-4549-bcac-f2b5f14f18b9","Type":"ContainerStarted","Data":"bd87547cfe087444a3db7a133aef38b2f4c42f0b9d61af93217c2519a0a041e8"} Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.110696 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.957849022 podStartE2EDuration="7.110665434s" podCreationTimestamp="2026-02-02 10:56:26 +0000 UTC" firstStartedPulling="2026-02-02 10:56:27.682661301 +0000 UTC m=+1468.774062741" lastFinishedPulling="2026-02-02 10:56:31.835477703 +0000 UTC m=+1472.926879153" observedRunningTime="2026-02-02 10:56:33.099467947 +0000 UTC m=+1474.190869397" watchObservedRunningTime="2026-02-02 10:56:33.110665434 +0000 UTC m=+1474.202066884" Feb 02 10:56:33 crc kubenswrapper[4845]: I0202 10:56:33.135264 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.13523177 podStartE2EDuration="2.13523177s" podCreationTimestamp="2026-02-02 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:33.130651766 +0000 UTC m=+1474.222053216" watchObservedRunningTime="2026-02-02 10:56:33.13523177 +0000 UTC m=+1474.226633560" Feb 02 10:56:35 crc kubenswrapper[4845]: I0202 10:56:35.290608 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 10:56:36 crc kubenswrapper[4845]: I0202 10:56:36.719732 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:56:36 crc kubenswrapper[4845]: I0202 
10:56:36.719800 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:56:39 crc kubenswrapper[4845]: I0202 10:56:39.412012 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:56:39 crc kubenswrapper[4845]: I0202 10:56:39.412566 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:56:40 crc kubenswrapper[4845]: I0202 10:56:40.290468 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 10:56:40 crc kubenswrapper[4845]: I0202 10:56:40.325605 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 10:56:40 crc kubenswrapper[4845]: I0202 10:56:40.420279 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="953beda6-58f2-45c2-b34e-0cb7db2d3bf6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:40 crc kubenswrapper[4845]: I0202 10:56:40.420281 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="953beda6-58f2-45c2-b34e-0cb7db2d3bf6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:41 crc kubenswrapper[4845]: I0202 10:56:41.348594 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 10:56:41 crc kubenswrapper[4845]: I0202 10:56:41.745332 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 10:56:41 crc kubenswrapper[4845]: I0202 10:56:41.745378 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 02 10:56:42 crc kubenswrapper[4845]: I0202 10:56:42.573151 4845 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c8162bea-daa1-42e3-8921-3c12ad56dfa6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.250:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:42 crc kubenswrapper[4845]: I0202 10:56:42.732129 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12adbd4d-efe1-4549-bcac-f2b5f14f18b9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:42 crc kubenswrapper[4845]: I0202 10:56:42.732127 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12adbd4d-efe1-4549-bcac-f2b5f14f18b9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 10:56:46 crc kubenswrapper[4845]: I0202 10:56:46.237160 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:56:46 crc kubenswrapper[4845]: I0202 10:56:46.237842 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:56:49 crc kubenswrapper[4845]: I0202 10:56:49.417778 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 10:56:49 crc kubenswrapper[4845]: I0202 10:56:49.418644 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 10:56:49 crc kubenswrapper[4845]: I0202 10:56:49.425695 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 10:56:49 crc kubenswrapper[4845]: I0202 10:56:49.431033 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 10:56:50 crc kubenswrapper[4845]: I0202 10:56:50.297855 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 10:56:50 crc kubenswrapper[4845]: I0202 10:56:50.304482 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 10:56:51 crc kubenswrapper[4845]: I0202 10:56:51.727875 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 10:56:51 crc kubenswrapper[4845]: I0202 10:56:51.735499 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 10:56:51 crc kubenswrapper[4845]: I0202 10:56:51.736958 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 10:56:52 crc kubenswrapper[4845]: I0202 10:56:52.321742 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 10:56:57 crc kubenswrapper[4845]: I0202 10:56:57.098507 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 10:57:02 crc kubenswrapper[4845]: I0202 10:57:02.881242 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:02 crc kubenswrapper[4845]: I0202 10:57:02.886660 4845 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:02 crc kubenswrapper[4845]: I0202 10:57:02.935279 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.009721 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.009908 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.010039 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8b6p\" (UniqueName: \"kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.112668 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.112844 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j8b6p\" (UniqueName: \"kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.113057 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.113961 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.114018 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.142955 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8b6p\" (UniqueName: \"kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p\") pod \"redhat-operators-fqkz8\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.215946 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:03 crc kubenswrapper[4845]: I0202 10:57:03.776545 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:04 crc kubenswrapper[4845]: I0202 10:57:04.480175 4845 generic.go:334] "Generic (PLEG): container finished" podID="21872d04-38b8-449f-a478-0f534e3632e0" containerID="173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4845]: I0202 10:57:04.480467 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerDied","Data":"173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55"} Feb 02 10:57:04 crc kubenswrapper[4845]: I0202 10:57:04.480497 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerStarted","Data":"79b771075a004dbc2e6ac2bf2681317d5263184874aaff3482381deacd0ed3a1"} Feb 02 10:57:04 crc kubenswrapper[4845]: I0202 10:57:04.483687 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:57:05 crc kubenswrapper[4845]: I0202 10:57:05.496417 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerStarted","Data":"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c"} Feb 02 10:57:11 crc kubenswrapper[4845]: I0202 10:57:11.584965 4845 generic.go:334] "Generic (PLEG): container finished" podID="21872d04-38b8-449f-a478-0f534e3632e0" containerID="9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c" exitCode=0 Feb 02 10:57:11 crc kubenswrapper[4845]: I0202 10:57:11.585014 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerDied","Data":"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c"} Feb 02 10:57:12 crc kubenswrapper[4845]: I0202 10:57:12.601058 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerStarted","Data":"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c"} Feb 02 10:57:12 crc kubenswrapper[4845]: I0202 10:57:12.628410 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fqkz8" podStartSLOduration=3.096168214 podStartE2EDuration="10.628369088s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="2026-02-02 10:57:04.483378637 +0000 UTC m=+1505.574780087" lastFinishedPulling="2026-02-02 10:57:12.015579491 +0000 UTC m=+1513.106980961" observedRunningTime="2026-02-02 10:57:12.623837776 +0000 UTC m=+1513.715239236" watchObservedRunningTime="2026-02-02 10:57:12.628369088 +0000 UTC m=+1513.719770538" Feb 02 10:57:13 crc kubenswrapper[4845]: I0202 10:57:13.216598 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:13 crc kubenswrapper[4845]: I0202 10:57:13.216642 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:14 crc kubenswrapper[4845]: I0202 10:57:14.273740 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fqkz8" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="registry-server" probeResult="failure" output=< Feb 02 10:57:14 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 10:57:14 crc kubenswrapper[4845]: > Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 
10:57:16.237953 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.238351 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.238422 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.239756 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.239869 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" gracePeriod=600 Feb 02 10:57:16 crc kubenswrapper[4845]: E0202 10:57:16.364419 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.653734 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" exitCode=0 Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.653847 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020"} Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.654110 4845 scope.go:117] "RemoveContainer" containerID="6667d6885fd474a5baafce195af3c9008051b075b4b764b236fc396ff08f675c" Feb 02 10:57:16 crc kubenswrapper[4845]: I0202 10:57:16.654857 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:57:16 crc kubenswrapper[4845]: E0202 10:57:16.655245 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:57:23 crc kubenswrapper[4845]: I0202 10:57:23.277400 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:23 crc kubenswrapper[4845]: I0202 10:57:23.337689 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:23 crc kubenswrapper[4845]: I0202 10:57:23.519462 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:24 crc kubenswrapper[4845]: I0202 10:57:24.753131 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fqkz8" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="registry-server" containerID="cri-o://b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c" gracePeriod=2 Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.544771 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.627453 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content\") pod \"21872d04-38b8-449f-a478-0f534e3632e0\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.627541 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8b6p\" (UniqueName: \"kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p\") pod \"21872d04-38b8-449f-a478-0f534e3632e0\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.627714 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities\") pod \"21872d04-38b8-449f-a478-0f534e3632e0\" (UID: \"21872d04-38b8-449f-a478-0f534e3632e0\") " Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.628816 4845 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities" (OuterVolumeSpecName: "utilities") pod "21872d04-38b8-449f-a478-0f534e3632e0" (UID: "21872d04-38b8-449f-a478-0f534e3632e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.650412 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p" (OuterVolumeSpecName: "kube-api-access-j8b6p") pod "21872d04-38b8-449f-a478-0f534e3632e0" (UID: "21872d04-38b8-449f-a478-0f534e3632e0"). InnerVolumeSpecName "kube-api-access-j8b6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.732015 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8b6p\" (UniqueName: \"kubernetes.io/projected/21872d04-38b8-449f-a478-0f534e3632e0-kube-api-access-j8b6p\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.732050 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.760401 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21872d04-38b8-449f-a478-0f534e3632e0" (UID: "21872d04-38b8-449f-a478-0f534e3632e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.771700 4845 generic.go:334] "Generic (PLEG): container finished" podID="21872d04-38b8-449f-a478-0f534e3632e0" containerID="b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c" exitCode=0 Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.771791 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerDied","Data":"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c"} Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.771815 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fqkz8" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.771857 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fqkz8" event={"ID":"21872d04-38b8-449f-a478-0f534e3632e0","Type":"ContainerDied","Data":"79b771075a004dbc2e6ac2bf2681317d5263184874aaff3482381deacd0ed3a1"} Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.771892 4845 scope.go:117] "RemoveContainer" containerID="b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.820767 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.826404 4845 scope.go:117] "RemoveContainer" containerID="9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.832488 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fqkz8"] Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.834040 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/21872d04-38b8-449f-a478-0f534e3632e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.895087 4845 scope.go:117] "RemoveContainer" containerID="173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.929171 4845 scope.go:117] "RemoveContainer" containerID="b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c" Feb 02 10:57:25 crc kubenswrapper[4845]: E0202 10:57:25.931415 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c\": container with ID starting with b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c not found: ID does not exist" containerID="b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.931527 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c"} err="failed to get container status \"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c\": rpc error: code = NotFound desc = could not find container \"b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c\": container with ID starting with b567c51caf362bac80b3b2be5b2875445ac9c9e670a723c3644ebd9069cd6d5c not found: ID does not exist" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.931578 4845 scope.go:117] "RemoveContainer" containerID="9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c" Feb 02 10:57:25 crc kubenswrapper[4845]: E0202 10:57:25.932494 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c\": container with ID starting 
with 9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c not found: ID does not exist" containerID="9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.932546 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c"} err="failed to get container status \"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c\": rpc error: code = NotFound desc = could not find container \"9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c\": container with ID starting with 9096f25597271a6576a0e09c4f4b246d4881d1a75f97e438185515d55a3a157c not found: ID does not exist" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.932580 4845 scope.go:117] "RemoveContainer" containerID="173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55" Feb 02 10:57:25 crc kubenswrapper[4845]: E0202 10:57:25.933255 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55\": container with ID starting with 173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55 not found: ID does not exist" containerID="173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55" Feb 02 10:57:25 crc kubenswrapper[4845]: I0202 10:57:25.933336 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55"} err="failed to get container status \"173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55\": rpc error: code = NotFound desc = could not find container \"173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55\": container with ID starting with 173a2a3b926b1d64073b7d4fd9a488aa0a3d36c047eaaf17d97e9644afaf5d55 not found: ID does 
not exist" Feb 02 10:57:27 crc kubenswrapper[4845]: I0202 10:57:27.725664 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21872d04-38b8-449f-a478-0f534e3632e0" path="/var/lib/kubelet/pods/21872d04-38b8-449f-a478-0f534e3632e0/volumes" Feb 02 10:57:29 crc kubenswrapper[4845]: I0202 10:57:29.721672 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:57:29 crc kubenswrapper[4845]: E0202 10:57:29.722322 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:57:42 crc kubenswrapper[4845]: I0202 10:57:42.714824 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:57:42 crc kubenswrapper[4845]: E0202 10:57:42.718289 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:57:53 crc kubenswrapper[4845]: I0202 10:57:53.713621 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:57:53 crc kubenswrapper[4845]: E0202 10:57:53.714448 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.473124 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:57:56 crc kubenswrapper[4845]: E0202 10:57:56.473959 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="registry-server" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.473973 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="registry-server" Feb 02 10:57:56 crc kubenswrapper[4845]: E0202 10:57:56.473991 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="extract-utilities" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.473998 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="extract-utilities" Feb 02 10:57:56 crc kubenswrapper[4845]: E0202 10:57:56.474031 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="extract-content" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.474038 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="extract-content" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.474275 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="21872d04-38b8-449f-a478-0f534e3632e0" containerName="registry-server" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.476191 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.488721 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.525470 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.525587 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnzkz\" (UniqueName: \"kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.525741 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.628207 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.628375 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.628405 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnzkz\" (UniqueName: \"kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.628752 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.628878 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.654452 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnzkz\" (UniqueName: \"kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz\") pod \"community-operators-c8c2s\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:56 crc kubenswrapper[4845]: I0202 10:57:56.801686 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:57:57 crc kubenswrapper[4845]: I0202 10:57:57.321653 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:57:58 crc kubenswrapper[4845]: I0202 10:57:58.167101 4845 generic.go:334] "Generic (PLEG): container finished" podID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerID="337950f222b414591109cc0dc956079dd3cdadf690a2d7fa356d1e8cbf7a3dfa" exitCode=0 Feb 02 10:57:58 crc kubenswrapper[4845]: I0202 10:57:58.167383 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerDied","Data":"337950f222b414591109cc0dc956079dd3cdadf690a2d7fa356d1e8cbf7a3dfa"} Feb 02 10:57:58 crc kubenswrapper[4845]: I0202 10:57:58.167706 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerStarted","Data":"3ae1282b014c1aefd636e6ea9e87b1d088db9865278bfa8c37390fd589a8e356"} Feb 02 10:57:59 crc kubenswrapper[4845]: I0202 10:57:59.192129 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerStarted","Data":"5cde5d2d272073d7cd8963b7aae94db410880e27625574112c41080117c9fe2a"} Feb 02 10:58:01 crc kubenswrapper[4845]: I0202 10:58:01.220839 4845 generic.go:334] "Generic (PLEG): container finished" podID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerID="5cde5d2d272073d7cd8963b7aae94db410880e27625574112c41080117c9fe2a" exitCode=0 Feb 02 10:58:01 crc kubenswrapper[4845]: I0202 10:58:01.220914 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" 
event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerDied","Data":"5cde5d2d272073d7cd8963b7aae94db410880e27625574112c41080117c9fe2a"} Feb 02 10:58:02 crc kubenswrapper[4845]: I0202 10:58:02.233601 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerStarted","Data":"5f2b7bb4369358a8d7e71aa3e4668d1ab3ee39e4a38064bf874407ad021a9394"} Feb 02 10:58:02 crc kubenswrapper[4845]: I0202 10:58:02.266002 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8c2s" podStartSLOduration=2.807313402 podStartE2EDuration="6.265981146s" podCreationTimestamp="2026-02-02 10:57:56 +0000 UTC" firstStartedPulling="2026-02-02 10:57:58.1699929 +0000 UTC m=+1559.261394350" lastFinishedPulling="2026-02-02 10:58:01.628660644 +0000 UTC m=+1562.720062094" observedRunningTime="2026-02-02 10:58:02.252880804 +0000 UTC m=+1563.344282264" watchObservedRunningTime="2026-02-02 10:58:02.265981146 +0000 UTC m=+1563.357382606" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.717668 4845 scope.go:117] "RemoveContainer" containerID="13f6f84ab8aba03eaa86df418135b90d82987c32100a7069900fe6528abad51b" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.754683 4845 scope.go:117] "RemoveContainer" containerID="2fbcebcb43f4c40bf6cc312cfba0d8248bb4757b182770fa199288683be575a0" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.802794 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.802916 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.870343 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:06 crc kubenswrapper[4845]: I0202 10:58:06.877995 4845 scope.go:117] "RemoveContainer" containerID="9249fd2f5427ff24d36b33c0b16d83ec165b6227454b663781f59d842989c2da" Feb 02 10:58:07 crc kubenswrapper[4845]: I0202 10:58:07.336907 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:07 crc kubenswrapper[4845]: I0202 10:58:07.401277 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:58:08 crc kubenswrapper[4845]: I0202 10:58:08.713174 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:58:08 crc kubenswrapper[4845]: E0202 10:58:08.713797 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:58:09 crc kubenswrapper[4845]: I0202 10:58:09.310296 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8c2s" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="registry-server" containerID="cri-o://5f2b7bb4369358a8d7e71aa3e4668d1ab3ee39e4a38064bf874407ad021a9394" gracePeriod=2 Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.337637 4845 generic.go:334] "Generic (PLEG): container finished" podID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerID="5f2b7bb4369358a8d7e71aa3e4668d1ab3ee39e4a38064bf874407ad021a9394" exitCode=0 Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.337701 4845 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerDied","Data":"5f2b7bb4369358a8d7e71aa3e4668d1ab3ee39e4a38064bf874407ad021a9394"} Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.531413 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.724215 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnzkz\" (UniqueName: \"kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz\") pod \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.724780 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities\") pod \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.725208 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content\") pod \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\" (UID: \"8d22eac5-ebbe-4f96-b316-2f8f285e525d\") " Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.725867 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities" (OuterVolumeSpecName: "utilities") pod "8d22eac5-ebbe-4f96-b316-2f8f285e525d" (UID: "8d22eac5-ebbe-4f96-b316-2f8f285e525d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.729130 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.734383 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz" (OuterVolumeSpecName: "kube-api-access-nnzkz") pod "8d22eac5-ebbe-4f96-b316-2f8f285e525d" (UID: "8d22eac5-ebbe-4f96-b316-2f8f285e525d"). InnerVolumeSpecName "kube-api-access-nnzkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.787305 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d22eac5-ebbe-4f96-b316-2f8f285e525d" (UID: "8d22eac5-ebbe-4f96-b316-2f8f285e525d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.832985 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d22eac5-ebbe-4f96-b316-2f8f285e525d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:10 crc kubenswrapper[4845]: I0202 10:58:10.833024 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnzkz\" (UniqueName: \"kubernetes.io/projected/8d22eac5-ebbe-4f96-b316-2f8f285e525d-kube-api-access-nnzkz\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.354823 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8c2s" event={"ID":"8d22eac5-ebbe-4f96-b316-2f8f285e525d","Type":"ContainerDied","Data":"3ae1282b014c1aefd636e6ea9e87b1d088db9865278bfa8c37390fd589a8e356"} Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.354968 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8c2s" Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.355227 4845 scope.go:117] "RemoveContainer" containerID="5f2b7bb4369358a8d7e71aa3e4668d1ab3ee39e4a38064bf874407ad021a9394" Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.391773 4845 scope.go:117] "RemoveContainer" containerID="5cde5d2d272073d7cd8963b7aae94db410880e27625574112c41080117c9fe2a" Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.407913 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.423983 4845 scope.go:117] "RemoveContainer" containerID="337950f222b414591109cc0dc956079dd3cdadf690a2d7fa356d1e8cbf7a3dfa" Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.426521 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8c2s"] Feb 02 10:58:11 crc kubenswrapper[4845]: I0202 10:58:11.726218 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" path="/var/lib/kubelet/pods/8d22eac5-ebbe-4f96-b316-2f8f285e525d/volumes" Feb 02 10:58:20 crc kubenswrapper[4845]: I0202 10:58:20.712318 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:58:20 crc kubenswrapper[4845]: E0202 10:58:20.713361 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:58:31 crc kubenswrapper[4845]: I0202 10:58:31.712762 4845 scope.go:117] "RemoveContainer" 
containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:58:31 crc kubenswrapper[4845]: E0202 10:58:31.713664 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:58:45 crc kubenswrapper[4845]: I0202 10:58:45.714335 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:58:45 crc kubenswrapper[4845]: E0202 10:58:45.715955 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:58:56 crc kubenswrapper[4845]: I0202 10:58:56.712815 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:58:56 crc kubenswrapper[4845]: E0202 10:58:56.713958 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:59:06 crc kubenswrapper[4845]: I0202 10:59:06.992479 4845 scope.go:117] 
"RemoveContainer" containerID="cc9a388001d07f3511088bc5b867a073370c23dfc566a76ed6b25957f4cd9611" Feb 02 10:59:07 crc kubenswrapper[4845]: I0202 10:59:07.045078 4845 scope.go:117] "RemoveContainer" containerID="423aeacd820ec5c2d675794591067804e90d1f2a6923ef3a9f13012b659813bc" Feb 02 10:59:07 crc kubenswrapper[4845]: I0202 10:59:07.092097 4845 scope.go:117] "RemoveContainer" containerID="1e24bbe2d8cd0583fc986cf6fd412f527daea58b4a85706eb389322bf6ad3af7" Feb 02 10:59:07 crc kubenswrapper[4845]: I0202 10:59:07.119485 4845 scope.go:117] "RemoveContainer" containerID="4c795b185512cbaa08a41087a290ab376088c31c6277e1a8f0ee3f21dc22200f" Feb 02 10:59:07 crc kubenswrapper[4845]: I0202 10:59:07.144907 4845 scope.go:117] "RemoveContainer" containerID="5c7e1e0f5ba6836be4b2cc0a23514474d1930ec71f3bf7b3e6b27bbccac7ee40" Feb 02 10:59:07 crc kubenswrapper[4845]: I0202 10:59:07.714745 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:59:07 crc kubenswrapper[4845]: E0202 10:59:07.715307 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:59:19 crc kubenswrapper[4845]: I0202 10:59:19.724029 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:59:19 crc kubenswrapper[4845]: E0202 10:59:19.724978 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:59:31 crc kubenswrapper[4845]: I0202 10:59:31.714417 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:59:31 crc kubenswrapper[4845]: E0202 10:59:31.715592 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:59:44 crc kubenswrapper[4845]: I0202 10:59:44.713413 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:59:44 crc kubenswrapper[4845]: E0202 10:59:44.714336 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 10:59:55 crc kubenswrapper[4845]: I0202 10:59:55.713355 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 10:59:55 crc kubenswrapper[4845]: E0202 10:59:55.714227 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.157869 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb"] Feb 02 11:00:00 crc kubenswrapper[4845]: E0202 11:00:00.159123 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="extract-content" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.159151 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="extract-content" Feb 02 11:00:00 crc kubenswrapper[4845]: E0202 11:00:00.159188 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="registry-server" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.159196 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="registry-server" Feb 02 11:00:00 crc kubenswrapper[4845]: E0202 11:00:00.159223 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="extract-utilities" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.159232 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="extract-utilities" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.159528 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d22eac5-ebbe-4f96-b316-2f8f285e525d" containerName="registry-server" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.160651 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.162866 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.163345 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.176169 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb"] Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.189349 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.189479 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szmg\" (UniqueName: \"kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.189619 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.292391 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.292475 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szmg\" (UniqueName: \"kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.292573 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.293783 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.308168 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.309741 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szmg\" (UniqueName: \"kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg\") pod \"collect-profiles-29500500-g5gxb\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.480895 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:00 crc kubenswrapper[4845]: I0202 11:00:00.989264 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb"] Feb 02 11:00:01 crc kubenswrapper[4845]: I0202 11:00:01.690276 4845 generic.go:334] "Generic (PLEG): container finished" podID="6b535e9d-4510-4191-9ab5-768d449b7bc3" containerID="e8b882d2679f84fe117d5aa26326dd7526ca6a2f74a7144d1db5d6a93a707b5a" exitCode=0 Feb 02 11:00:01 crc kubenswrapper[4845]: I0202 11:00:01.690464 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" event={"ID":"6b535e9d-4510-4191-9ab5-768d449b7bc3","Type":"ContainerDied","Data":"e8b882d2679f84fe117d5aa26326dd7526ca6a2f74a7144d1db5d6a93a707b5a"} Feb 02 11:00:01 crc kubenswrapper[4845]: I0202 11:00:01.690768 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" 
event={"ID":"6b535e9d-4510-4191-9ab5-768d449b7bc3","Type":"ContainerStarted","Data":"bab954d8ae8de5cb4daf181036b811bdc367d81b9d5a806b6002f216de90b8f0"} Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.115082 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.262452 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7szmg\" (UniqueName: \"kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg\") pod \"6b535e9d-4510-4191-9ab5-768d449b7bc3\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.262565 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume\") pod \"6b535e9d-4510-4191-9ab5-768d449b7bc3\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.262663 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume\") pod \"6b535e9d-4510-4191-9ab5-768d449b7bc3\" (UID: \"6b535e9d-4510-4191-9ab5-768d449b7bc3\") " Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.265319 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b535e9d-4510-4191-9ab5-768d449b7bc3" (UID: "6b535e9d-4510-4191-9ab5-768d449b7bc3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.271057 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg" (OuterVolumeSpecName: "kube-api-access-7szmg") pod "6b535e9d-4510-4191-9ab5-768d449b7bc3" (UID: "6b535e9d-4510-4191-9ab5-768d449b7bc3"). InnerVolumeSpecName "kube-api-access-7szmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.271333 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b535e9d-4510-4191-9ab5-768d449b7bc3" (UID: "6b535e9d-4510-4191-9ab5-768d449b7bc3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.365958 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7szmg\" (UniqueName: \"kubernetes.io/projected/6b535e9d-4510-4191-9ab5-768d449b7bc3-kube-api-access-7szmg\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.365994 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b535e9d-4510-4191-9ab5-768d449b7bc3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.366007 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b535e9d-4510-4191-9ab5-768d449b7bc3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.713696 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.725342 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb" event={"ID":"6b535e9d-4510-4191-9ab5-768d449b7bc3","Type":"ContainerDied","Data":"bab954d8ae8de5cb4daf181036b811bdc367d81b9d5a806b6002f216de90b8f0"} Feb 02 11:00:03 crc kubenswrapper[4845]: I0202 11:00:03.725389 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab954d8ae8de5cb4daf181036b811bdc367d81b9d5a806b6002f216de90b8f0" Feb 02 11:00:07 crc kubenswrapper[4845]: I0202 11:00:07.247748 4845 scope.go:117] "RemoveContainer" containerID="9eb0cc21db22e7b0a6e9194e496c67f804362485d557f665096269f5e604e637" Feb 02 11:00:07 crc kubenswrapper[4845]: I0202 11:00:07.279022 4845 scope.go:117] "RemoveContainer" containerID="7f93106b71edc6fc8f88297c4c620682fa2e4fe0213e9e1cad77a132eee7f48a" Feb 02 11:00:07 crc kubenswrapper[4845]: I0202 11:00:07.300698 4845 scope.go:117] "RemoveContainer" containerID="65f19a5e53861afad5be0dc22811e405a7089640d2466a967d0205688cbae9c3" Feb 02 11:00:07 crc kubenswrapper[4845]: I0202 11:00:07.326859 4845 scope.go:117] "RemoveContainer" containerID="1e8465847af87f5185fe9a371926a2dffc326ca101ce721ffdfe10ea1d45b3b5" Feb 02 11:00:10 crc kubenswrapper[4845]: I0202 11:00:10.714160 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:00:10 crc kubenswrapper[4845]: E0202 11:00:10.715082 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:00:21 crc kubenswrapper[4845]: I0202 11:00:21.713821 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:00:21 crc kubenswrapper[4845]: E0202 11:00:21.714581 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:00:36 crc kubenswrapper[4845]: I0202 11:00:36.712720 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:00:36 crc kubenswrapper[4845]: E0202 11:00:36.713870 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:00:48 crc kubenswrapper[4845]: I0202 11:00:48.713657 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:00:48 crc kubenswrapper[4845]: E0202 11:00:48.714567 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.171035 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500501-znk7q"] Feb 02 11:01:00 crc kubenswrapper[4845]: E0202 11:01:00.179644 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b535e9d-4510-4191-9ab5-768d449b7bc3" containerName="collect-profiles" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.180009 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b535e9d-4510-4191-9ab5-768d449b7bc3" containerName="collect-profiles" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.180433 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b535e9d-4510-4191-9ab5-768d449b7bc3" containerName="collect-profiles" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.181552 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.210135 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-znk7q"] Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.289317 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkts\" (UniqueName: \"kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.289603 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.289666 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.289769 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.392710 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.392858 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.392946 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkts\" (UniqueName: \"kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.393037 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.399712 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.404496 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.404852 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.411760 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkts\" (UniqueName: \"kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts\") pod \"keystone-cron-29500501-znk7q\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:00 crc kubenswrapper[4845]: I0202 11:01:00.507076 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:01 crc kubenswrapper[4845]: I0202 11:01:01.017401 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-znk7q"] Feb 02 11:01:01 crc kubenswrapper[4845]: W0202 11:01:01.023386 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7303667b_89bb_4ad1_92a8_3c94525911d4.slice/crio-9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626 WatchSource:0}: Error finding container 9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626: Status 404 returned error can't find the container with id 9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626 Feb 02 11:01:01 crc kubenswrapper[4845]: I0202 11:01:01.413864 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-znk7q" event={"ID":"7303667b-89bb-4ad1-92a8-3c94525911d4","Type":"ContainerStarted","Data":"78e2917d9a9e1517466ea6ff81ddaca6478a9b6469927ef2ef8488e6ebc57d42"} Feb 02 11:01:01 crc kubenswrapper[4845]: I0202 11:01:01.414897 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-znk7q" event={"ID":"7303667b-89bb-4ad1-92a8-3c94525911d4","Type":"ContainerStarted","Data":"9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626"} Feb 02 11:01:01 crc kubenswrapper[4845]: I0202 11:01:01.443906 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500501-znk7q" podStartSLOduration=1.443869482 podStartE2EDuration="1.443869482s" podCreationTimestamp="2026-02-02 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:01.434127462 +0000 UTC m=+1742.525528922" watchObservedRunningTime="2026-02-02 11:01:01.443869482 +0000 UTC m=+1742.535270932" Feb 02 11:01:01 crc 
kubenswrapper[4845]: I0202 11:01:01.713562 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:01:01 crc kubenswrapper[4845]: E0202 11:01:01.714247 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:01:05 crc kubenswrapper[4845]: I0202 11:01:05.456926 4845 generic.go:334] "Generic (PLEG): container finished" podID="7303667b-89bb-4ad1-92a8-3c94525911d4" containerID="78e2917d9a9e1517466ea6ff81ddaca6478a9b6469927ef2ef8488e6ebc57d42" exitCode=0 Feb 02 11:01:05 crc kubenswrapper[4845]: I0202 11:01:05.457046 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-znk7q" event={"ID":"7303667b-89bb-4ad1-92a8-3c94525911d4","Type":"ContainerDied","Data":"78e2917d9a9e1517466ea6ff81ddaca6478a9b6469927ef2ef8488e6ebc57d42"} Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.901826 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.980528 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data\") pod \"7303667b-89bb-4ad1-92a8-3c94525911d4\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.980822 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle\") pod \"7303667b-89bb-4ad1-92a8-3c94525911d4\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.980988 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkts\" (UniqueName: \"kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts\") pod \"7303667b-89bb-4ad1-92a8-3c94525911d4\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.981168 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys\") pod \"7303667b-89bb-4ad1-92a8-3c94525911d4\" (UID: \"7303667b-89bb-4ad1-92a8-3c94525911d4\") " Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.988874 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7303667b-89bb-4ad1-92a8-3c94525911d4" (UID: "7303667b-89bb-4ad1-92a8-3c94525911d4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:06 crc kubenswrapper[4845]: I0202 11:01:06.989922 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts" (OuterVolumeSpecName: "kube-api-access-rqkts") pod "7303667b-89bb-4ad1-92a8-3c94525911d4" (UID: "7303667b-89bb-4ad1-92a8-3c94525911d4"). InnerVolumeSpecName "kube-api-access-rqkts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.019816 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7303667b-89bb-4ad1-92a8-3c94525911d4" (UID: "7303667b-89bb-4ad1-92a8-3c94525911d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.050286 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data" (OuterVolumeSpecName: "config-data") pod "7303667b-89bb-4ad1-92a8-3c94525911d4" (UID: "7303667b-89bb-4ad1-92a8-3c94525911d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.085279 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.085325 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.085344 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqkts\" (UniqueName: \"kubernetes.io/projected/7303667b-89bb-4ad1-92a8-3c94525911d4-kube-api-access-rqkts\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.085357 4845 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7303667b-89bb-4ad1-92a8-3c94525911d4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.444323 4845 scope.go:117] "RemoveContainer" containerID="f97ac7a8db772c50e60d2fbaebaa59d3f1748dd362df2a5683272ada45ea3c75" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.471897 4845 scope.go:117] "RemoveContainer" containerID="3ebbd0d0a7130e66b17b81004ea1929c3e2d40fef4c6767da3df2300291382c2" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.506493 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-znk7q" event={"ID":"7303667b-89bb-4ad1-92a8-3c94525911d4","Type":"ContainerDied","Data":"9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626"} Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.506543 4845 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9cb43210ee4d612528da59913bdb2ba664514714e9e2648d77d865df0d94f626" Feb 02 11:01:07 crc kubenswrapper[4845]: I0202 11:01:07.506602 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500501-znk7q" Feb 02 11:01:13 crc kubenswrapper[4845]: I0202 11:01:13.712837 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:01:13 crc kubenswrapper[4845]: E0202 11:01:13.713696 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:01:28 crc kubenswrapper[4845]: I0202 11:01:28.714947 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:01:28 crc kubenswrapper[4845]: E0202 11:01:28.716226 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:01:32 crc kubenswrapper[4845]: I0202 11:01:32.070097 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6d20-account-create-update-8zrt2"] Feb 02 11:01:32 crc kubenswrapper[4845]: I0202 11:01:32.082281 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-q8crj"] Feb 02 11:01:32 crc kubenswrapper[4845]: I0202 
11:01:32.096131 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6d20-account-create-update-8zrt2"] Feb 02 11:01:32 crc kubenswrapper[4845]: I0202 11:01:32.107498 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-q8crj"] Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.037310 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hchq8"] Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.064709 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-42da-account-create-update-dmqrb"] Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.090738 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-42da-account-create-update-dmqrb"] Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.102986 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hchq8"] Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.725436 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8" path="/var/lib/kubelet/pods/05069a45-f3d6-43e9-bf29-2e3a3cbcc2d8/volumes" Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.726193 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4db3a3-fdab-41f0-b675-26aaaa575769" path="/var/lib/kubelet/pods/1f4db3a3-fdab-41f0-b675-26aaaa575769/volumes" Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.726853 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802ba94f-17f1-4eed-93aa-95e5ffe1ea43" path="/var/lib/kubelet/pods/802ba94f-17f1-4eed-93aa-95e5ffe1ea43/volumes" Feb 02 11:01:33 crc kubenswrapper[4845]: I0202 11:01:33.727590 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e" path="/var/lib/kubelet/pods/82c7eb05-f8ef-40a5-b799-af8bfdfd9c4e/volumes" Feb 02 11:01:34 crc 
kubenswrapper[4845]: I0202 11:01:34.039303 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-2fba-account-create-update-57wqb"] Feb 02 11:01:34 crc kubenswrapper[4845]: I0202 11:01:34.051147 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-2fba-account-create-update-57wqb"] Feb 02 11:01:35 crc kubenswrapper[4845]: I0202 11:01:35.029459 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-wlplx"] Feb 02 11:01:35 crc kubenswrapper[4845]: I0202 11:01:35.043364 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-wlplx"] Feb 02 11:01:35 crc kubenswrapper[4845]: I0202 11:01:35.728890 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc" path="/var/lib/kubelet/pods/8d4f7cb3-0991-4ce6-a69d-fd6f17bbc2fc/volumes" Feb 02 11:01:35 crc kubenswrapper[4845]: I0202 11:01:35.729581 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc354af6-cf06-4532-83c7-845e6f8f41c5" path="/var/lib/kubelet/pods/fc354af6-cf06-4532-83c7-845e6f8f41c5/volumes" Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.036984 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4gj6v"] Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.056067 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1e2c-account-create-update-jxgpc"] Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.069613 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1e2c-account-create-update-jxgpc"] Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.083990 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4gj6v"] Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.723704 4845 scope.go:117] "RemoveContainer" 
containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:01:39 crc kubenswrapper[4845]: E0202 11:01:39.724285 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.724871 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13911fd9-043e-424e-ba84-da6af616a202" path="/var/lib/kubelet/pods/13911fd9-043e-424e-ba84-da6af616a202/volumes" Feb 02 11:01:39 crc kubenswrapper[4845]: I0202 11:01:39.725667 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa851884-d67b-4c70-8ad6-9dcf92001aa5" path="/var/lib/kubelet/pods/aa851884-d67b-4c70-8ad6-9dcf92001aa5/volumes" Feb 02 11:01:48 crc kubenswrapper[4845]: I0202 11:01:48.049227 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c783-account-create-update-c8k62"] Feb 02 11:01:48 crc kubenswrapper[4845]: I0202 11:01:48.064805 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-z27qx"] Feb 02 11:01:48 crc kubenswrapper[4845]: I0202 11:01:48.075464 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c783-account-create-update-c8k62"] Feb 02 11:01:48 crc kubenswrapper[4845]: I0202 11:01:48.086547 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-z27qx"] Feb 02 11:01:49 crc kubenswrapper[4845]: I0202 11:01:49.726542 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712e6155-a77e-4f9c-9d55-a6edab62e9a7" 
path="/var/lib/kubelet/pods/712e6155-a77e-4f9c-9d55-a6edab62e9a7/volumes" Feb 02 11:01:49 crc kubenswrapper[4845]: I0202 11:01:49.727840 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fdfb88-9683-4cc2-95f1-6ab55c558dfd" path="/var/lib/kubelet/pods/e0fdfb88-9683-4cc2-95f1-6ab55c558dfd/volumes" Feb 02 11:01:50 crc kubenswrapper[4845]: I0202 11:01:50.712941 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:01:50 crc kubenswrapper[4845]: E0202 11:01:50.713327 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:02:03 crc kubenswrapper[4845]: I0202 11:02:03.712792 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:02:03 crc kubenswrapper[4845]: E0202 11:02:03.713585 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:02:05 crc kubenswrapper[4845]: I0202 11:02:05.039043 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qqb26"] Feb 02 11:02:05 crc kubenswrapper[4845]: I0202 11:02:05.055929 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-qqb26"] Feb 02 11:02:05 crc kubenswrapper[4845]: I0202 11:02:05.726454 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b9529c-8c20-47e9-8c19-910a31b30683" path="/var/lib/kubelet/pods/62b9529c-8c20-47e9-8c19-910a31b30683/volumes" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.549510 4845 scope.go:117] "RemoveContainer" containerID="2b3f2d6cbc2fbaafd3e7acd158c2d8862a6fa7d667477c7485ce11aa580584b6" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.586667 4845 scope.go:117] "RemoveContainer" containerID="e2da395f32221226555ec9a36b4c70b9ebd84972cf5fd496af49cd196e7172a2" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.640339 4845 scope.go:117] "RemoveContainer" containerID="7371036457c908a60d98a53f34d54f0d70618efbebcf97fbc3e3f1b041ae7110" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.704052 4845 scope.go:117] "RemoveContainer" containerID="54419ef52d04b18470afc9cd9fe1e7776568928b396efb56b1f79342767e7b05" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.758521 4845 scope.go:117] "RemoveContainer" containerID="8f242d53b8b45b41f78ed81a7dfba88bead6b72ed12835b100468efa95d3864d" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.844292 4845 scope.go:117] "RemoveContainer" containerID="a6fc1e9766e80a7882547aca33a905323116744093fc66e9ca25989843c77b7a" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.900116 4845 scope.go:117] "RemoveContainer" containerID="a93a594f3ef1f5fe22327995e92493ed98d1dba81262b5bcf617c4c84d0e3aba" Feb 02 11:02:07 crc kubenswrapper[4845]: I0202 11:02:07.952581 4845 scope.go:117] "RemoveContainer" containerID="f34968fe05a8bf94342bdc3d85ae1b9aa88e7cf9dc5bd5dd49c6ff1a1947185f" Feb 02 11:02:08 crc kubenswrapper[4845]: I0202 11:02:08.009843 4845 scope.go:117] "RemoveContainer" containerID="2206de97b65437900f5876967e6088126d1863f30884da3104dd45175b5b4a13" Feb 02 11:02:08 crc kubenswrapper[4845]: I0202 11:02:08.034047 4845 scope.go:117] 
"RemoveContainer" containerID="4440ff2b707ea7d06429dcadf4f2755be4ff5fe9e3d35fbb9b3d5449440ebcdb" Feb 02 11:02:08 crc kubenswrapper[4845]: I0202 11:02:08.067116 4845 scope.go:117] "RemoveContainer" containerID="cf759ca4ae8492477c32d3ee2af20b6a746da7ed09b5646264fc1f94d46044d6" Feb 02 11:02:08 crc kubenswrapper[4845]: I0202 11:02:08.098830 4845 scope.go:117] "RemoveContainer" containerID="3459391ca6853a62f9d027449b7b8a2cf779254301205a614a5d854038e87879" Feb 02 11:02:08 crc kubenswrapper[4845]: I0202 11:02:08.128108 4845 scope.go:117] "RemoveContainer" containerID="c7c52a689d78c280fc8b76cead8491d0d675cc8f51ac92d599f8ca929b26e719" Feb 02 11:02:11 crc kubenswrapper[4845]: I0202 11:02:11.054424 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-8ggwt"] Feb 02 11:02:11 crc kubenswrapper[4845]: I0202 11:02:11.069704 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-8ggwt"] Feb 02 11:02:11 crc kubenswrapper[4845]: I0202 11:02:11.730496 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367466e2-34f1-4f2c-9e11-eb6c24c5318c" path="/var/lib/kubelet/pods/367466e2-34f1-4f2c-9e11-eb6c24c5318c/volumes" Feb 02 11:02:12 crc kubenswrapper[4845]: I0202 11:02:12.034299 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-wnlhd"] Feb 02 11:02:12 crc kubenswrapper[4845]: I0202 11:02:12.047708 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h6qld"] Feb 02 11:02:12 crc kubenswrapper[4845]: I0202 11:02:12.059544 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-wnlhd"] Feb 02 11:02:12 crc kubenswrapper[4845]: I0202 11:02:12.075054 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h6qld"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.038497 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-phm7s"] Feb 02 
11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.061808 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ee84-account-create-update-f2n87"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.072635 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ee84-account-create-update-f2n87"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.083381 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3a8c-account-create-update-qmrlh"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.093628 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-phm7s"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.103587 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3a8c-account-create-update-qmrlh"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.113922 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-edbb-account-create-update-gt7ll"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.124157 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-edbb-account-create-update-gt7ll"] Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.725901 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf66acf-0a94-4850-913b-711b19b88dd3" path="/var/lib/kubelet/pods/2cf66acf-0a94-4850-913b-711b19b88dd3/volumes" Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.726630 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e0fd8e-0f85-48be-b690-c11e3c09f340" path="/var/lib/kubelet/pods/37e0fd8e-0f85-48be-b690-c11e3c09f340/volumes" Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.727245 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e" path="/var/lib/kubelet/pods/45ea84b0-b5f2-4a74-8f6a-67b4176e5d1e/volumes" Feb 02 11:02:13 crc kubenswrapper[4845]: 
I0202 11:02:13.727820 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1fe923-0409-4c3c-869c-9d0c09a2506a" path="/var/lib/kubelet/pods/ad1fe923-0409-4c3c-869c-9d0c09a2506a/volumes" Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.728867 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afec66f7-184b-44f1-a172-b1e78739309d" path="/var/lib/kubelet/pods/afec66f7-184b-44f1-a172-b1e78739309d/volumes" Feb 02 11:02:13 crc kubenswrapper[4845]: I0202 11:02:13.729483 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb890e0-ca91-4204-8e4b-9036a64e56e1" path="/var/lib/kubelet/pods/efb890e0-ca91-4204-8e4b-9036a64e56e1/volumes" Feb 02 11:02:14 crc kubenswrapper[4845]: I0202 11:02:14.712690 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:02:14 crc kubenswrapper[4845]: E0202 11:02:14.713037 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:02:17 crc kubenswrapper[4845]: I0202 11:02:17.034170 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cba3-account-create-update-bph8b"] Feb 02 11:02:17 crc kubenswrapper[4845]: I0202 11:02:17.045756 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cba3-account-create-update-bph8b"] Feb 02 11:02:17 crc kubenswrapper[4845]: I0202 11:02:17.726274 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af47917-824a-452b-b0db-03ad3f4861df" path="/var/lib/kubelet/pods/9af47917-824a-452b-b0db-03ad3f4861df/volumes" Feb 02 11:02:18 
crc kubenswrapper[4845]: I0202 11:02:18.058267 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kgn95"] Feb 02 11:02:18 crc kubenswrapper[4845]: I0202 11:02:18.073249 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kgn95"] Feb 02 11:02:19 crc kubenswrapper[4845]: I0202 11:02:19.726046 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34877df4-b654-4e0c-ac67-da6fd95c249d" path="/var/lib/kubelet/pods/34877df4-b654-4e0c-ac67-da6fd95c249d/volumes" Feb 02 11:02:24 crc kubenswrapper[4845]: I0202 11:02:24.047779 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jbstq"] Feb 02 11:02:24 crc kubenswrapper[4845]: I0202 11:02:24.064945 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jbstq"] Feb 02 11:02:25 crc kubenswrapper[4845]: I0202 11:02:25.743249 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967b449a-1414-4a5c-b625-bcaf12b17ade" path="/var/lib/kubelet/pods/967b449a-1414-4a5c-b625-bcaf12b17ade/volumes" Feb 02 11:02:29 crc kubenswrapper[4845]: I0202 11:02:29.723825 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:02:30 crc kubenswrapper[4845]: I0202 11:02:30.698812 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e"} Feb 02 11:02:56 crc kubenswrapper[4845]: I0202 11:02:56.056247 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cpdt4"] Feb 02 11:02:56 crc kubenswrapper[4845]: I0202 11:02:56.073145 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cpdt4"] Feb 02 11:02:57 crc kubenswrapper[4845]: I0202 
11:02:57.729456 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2" path="/var/lib/kubelet/pods/856e9eb8-5bdb-40d0-9ad7-dec8d2594fc2/volumes" Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.043743 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gxjbc"] Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.061559 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f7js4"] Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.078204 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f7js4"] Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.093041 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gxjbc"] Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.728252 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b" path="/var/lib/kubelet/pods/1cb74cc1-ae6a-46c7-8ce6-ba4d353ce47b/volumes" Feb 02 11:03:07 crc kubenswrapper[4845]: I0202 11:03:07.729520 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3aa591-f1f0-4264-a970-d8172cc24781" path="/var/lib/kubelet/pods/ae3aa591-f1f0-4264-a970-d8172cc24781/volumes" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.037286 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-kxrm5"] Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.050541 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-kxrm5"] Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.400080 4845 scope.go:117] "RemoveContainer" containerID="c026c97f3f623cb46f500e205081203362ba8f0b275d0368c0ea74ac7d34d244" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.439648 4845 scope.go:117] "RemoveContainer" 
containerID="eaea8c79134a763eb67941fd983f9ce0e269d99bbc22b44421da606e30805a94" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.531720 4845 scope.go:117] "RemoveContainer" containerID="7d589ab274ae62a36101e94b25da0bcc5210eda8997714fcc496cd1866ddd622" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.583421 4845 scope.go:117] "RemoveContainer" containerID="5c1dc639a0ba7e9ddef0cb628d1e688f84eee0dbcad460df6e423cdfb04749bd" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.665306 4845 scope.go:117] "RemoveContainer" containerID="3a7ec3be2e02a83d1849c451b822789235edc5a4672079139f59894d6d036a70" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.747753 4845 scope.go:117] "RemoveContainer" containerID="a8fcb11c46488e7ab4d44f9b73e21b0ab99aab04b639ff39da8c3dcc0a64fd01" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.800067 4845 scope.go:117] "RemoveContainer" containerID="493cf0ea40e1ecb4c2bc2c0fc9bcd32cc6e220ecfec73e51aec90faf9abebac3" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.847787 4845 scope.go:117] "RemoveContainer" containerID="51bb841af27a85ca68abff1f32dcaf10c9ab7f03618a7c256ab9040498ec70ed" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.897508 4845 scope.go:117] "RemoveContainer" containerID="c4fb16737964c587428fe338ca95d6abb864bb90373d40cc0d8bd05a89c69fe2" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.926948 4845 scope.go:117] "RemoveContainer" containerID="e50f62cc9707edb269ffe6698207592dfbc48d0ade6ff635de9614b2c3d62a34" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.957984 4845 scope.go:117] "RemoveContainer" containerID="b23d76a377763a9782e43474e58cb72f1e55efdf39ac9b7aaca3beca20c268f7" Feb 02 11:03:08 crc kubenswrapper[4845]: I0202 11:03:08.996238 4845 scope.go:117] "RemoveContainer" containerID="f8e4a4bf00801e300e8c97b01bb80d6e16ede05a4fe2abe38abfdf7564fa62f4" Feb 02 11:03:09 crc kubenswrapper[4845]: I0202 11:03:09.037535 4845 scope.go:117] "RemoveContainer" 
containerID="055042cb2d15b6eebd454cdc9f356a4e981de86dda6b046eef4e17f7f79f827f" Feb 02 11:03:09 crc kubenswrapper[4845]: I0202 11:03:09.046836 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hft5g"] Feb 02 11:03:09 crc kubenswrapper[4845]: I0202 11:03:09.064194 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hft5g"] Feb 02 11:03:09 crc kubenswrapper[4845]: I0202 11:03:09.728932 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250e18d9-cb14-4309-8d0c-fb341511dba6" path="/var/lib/kubelet/pods/250e18d9-cb14-4309-8d0c-fb341511dba6/volumes" Feb 02 11:03:09 crc kubenswrapper[4845]: I0202 11:03:09.729994 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9868fb5b-b18e-42b0-8532-6e6a55da71d2" path="/var/lib/kubelet/pods/9868fb5b-b18e-42b0-8532-6e6a55da71d2/volumes" Feb 02 11:03:34 crc kubenswrapper[4845]: I0202 11:03:34.046410 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-g8b4r"] Feb 02 11:03:34 crc kubenswrapper[4845]: I0202 11:03:34.057306 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-g8b4r"] Feb 02 11:03:35 crc kubenswrapper[4845]: I0202 11:03:35.731300 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183b0ef9-490f-43a1-a464-2bd64a820ebd" path="/var/lib/kubelet/pods/183b0ef9-490f-43a1-a464-2bd64a820ebd/volumes" Feb 02 11:04:09 crc kubenswrapper[4845]: I0202 11:04:09.462459 4845 scope.go:117] "RemoveContainer" containerID="b6c0a29825672ec889a1c4e9480e6e2959d05e730a7944a0c2ac39bff41e3be4" Feb 02 11:04:09 crc kubenswrapper[4845]: I0202 11:04:09.508159 4845 scope.go:117] "RemoveContainer" containerID="bf4552b15f381b58f4ac832ee06ab76b9eb11fbeaace10aae25c2f5b9bfbac69" Feb 02 11:04:09 crc kubenswrapper[4845]: I0202 11:04:09.574718 4845 scope.go:117] "RemoveContainer" containerID="1ae234554f1bf061b9f12986d072427a2f50a26aa3b60b9c9cf12a5ccc0e8cce" Feb 
Feb 02 11:04:32 crc kubenswrapper[4845]: I0202 11:04:32.041806 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8fed-account-create-update-p76r4"]
Feb 02 11:04:32 crc kubenswrapper[4845]: I0202 11:04:32.052875 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8fed-account-create-update-p76r4"]
Feb 02 11:04:32 crc kubenswrapper[4845]: I0202 11:04:32.076282 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p6lbd"]
Feb 02 11:04:32 crc kubenswrapper[4845]: I0202 11:04:32.088138 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p6lbd"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.026922 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vkhkq"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.038056 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-69t2n"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.050212 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vkhkq"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.062440 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ab2-account-create-update-l992n"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.086582 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-69t2n"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.100453 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4ab2-account-create-update-l992n"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.111870 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dfd5-account-create-update-9mdwn"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.121967 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-dfd5-account-create-update-9mdwn"]
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.727055 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a7f3c4-2a4a-4d07-91ee-27a63961c272" path="/var/lib/kubelet/pods/08a7f3c4-2a4a-4d07-91ee-27a63961c272/volumes"
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.728345 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b758e3-acc2-451a-b64d-9c53a7e5f98f" path="/var/lib/kubelet/pods/36b758e3-acc2-451a-b64d-9c53a7e5f98f/volumes"
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.729223 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621aa5b7-f496-48f4-a72d-74e8886f813e" path="/var/lib/kubelet/pods/621aa5b7-f496-48f4-a72d-74e8886f813e/volumes"
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.730031 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65acb40f-b003-4d37-93c0-4198beba28ed" path="/var/lib/kubelet/pods/65acb40f-b003-4d37-93c0-4198beba28ed/volumes"
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.731621 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e02369-64e6-46f8-a84d-f50396230784" path="/var/lib/kubelet/pods/93e02369-64e6-46f8-a84d-f50396230784/volumes"
Feb 02 11:04:33 crc kubenswrapper[4845]: I0202 11:04:33.734050 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ca8c7e-f45d-4014-9599-2ba08495811f" path="/var/lib/kubelet/pods/b9ca8c7e-f45d-4014-9599-2ba08495811f/volumes"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.360612 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:40 crc kubenswrapper[4845]: E0202 11:04:40.361789 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7303667b-89bb-4ad1-92a8-3c94525911d4" containerName="keystone-cron"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.361806 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7303667b-89bb-4ad1-92a8-3c94525911d4" containerName="keystone-cron"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.362071 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="7303667b-89bb-4ad1-92a8-3c94525911d4" containerName="keystone-cron"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.364497 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.376240 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.497061 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.497261 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ttzc\" (UniqueName: \"kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.497491 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.599154 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.599492 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ttzc\" (UniqueName: \"kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.599657 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.599841 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.600174 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.623678 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ttzc\" (UniqueName: \"kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc\") pod \"redhat-marketplace-g2rm9\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") " pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:40 crc kubenswrapper[4845]: I0202 11:04:40.711039 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:41 crc kubenswrapper[4845]: I0202 11:04:41.222103 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:41 crc kubenswrapper[4845]: I0202 11:04:41.258211 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerStarted","Data":"83963d09faf5438bbb2bea1fea716dfae030b316cca0108f56c9404d7d2ecfc4"}
Feb 02 11:04:42 crc kubenswrapper[4845]: I0202 11:04:42.282836 4845 generic.go:334] "Generic (PLEG): container finished" podID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerID="4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220" exitCode=0
Feb 02 11:04:42 crc kubenswrapper[4845]: I0202 11:04:42.283048 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerDied","Data":"4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220"}
Feb 02 11:04:42 crc kubenswrapper[4845]: I0202 11:04:42.285744 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.294501 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerStarted","Data":"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"}
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.550685 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.553117 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.562730 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.569481 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.569607 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmthk\" (UniqueName: \"kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.569811 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.673514 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.673684 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmthk\" (UniqueName: \"kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.673908 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.674225 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.674627 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.706765 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmthk\" (UniqueName: \"kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk\") pod \"certified-operators-8c6ts\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") " pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:43 crc kubenswrapper[4845]: I0202 11:04:43.870803 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:44 crc kubenswrapper[4845]: I0202 11:04:44.307967 4845 generic.go:334] "Generic (PLEG): container finished" podID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerID="58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c" exitCode=0
Feb 02 11:04:44 crc kubenswrapper[4845]: I0202 11:04:44.308028 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerDied","Data":"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"}
Feb 02 11:04:44 crc kubenswrapper[4845]: I0202 11:04:44.410448 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:45 crc kubenswrapper[4845]: I0202 11:04:45.324204 4845 generic.go:334] "Generic (PLEG): container finished" podID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerID="26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79" exitCode=0
Feb 02 11:04:45 crc kubenswrapper[4845]: I0202 11:04:45.324571 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerDied","Data":"26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79"}
Feb 02 11:04:45 crc kubenswrapper[4845]: I0202 11:04:45.324652 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerStarted","Data":"b1870972ad32591386243137d4d6406259a3aed6d72d8c02868ba64bfdba894c"}
Feb 02 11:04:45 crc kubenswrapper[4845]: I0202 11:04:45.329671 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerStarted","Data":"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"}
Feb 02 11:04:45 crc kubenswrapper[4845]: I0202 11:04:45.395877 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g2rm9" podStartSLOduration=2.956421893 podStartE2EDuration="5.395853667s" podCreationTimestamp="2026-02-02 11:04:40 +0000 UTC" firstStartedPulling="2026-02-02 11:04:42.28553992 +0000 UTC m=+1963.376941370" lastFinishedPulling="2026-02-02 11:04:44.724971694 +0000 UTC m=+1965.816373144" observedRunningTime="2026-02-02 11:04:45.371773497 +0000 UTC m=+1966.463174957" watchObservedRunningTime="2026-02-02 11:04:45.395853667 +0000 UTC m=+1966.487255117"
Feb 02 11:04:46 crc kubenswrapper[4845]: I0202 11:04:46.237741 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:04:46 crc kubenswrapper[4845]: I0202 11:04:46.238063 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:04:47 crc kubenswrapper[4845]: I0202 11:04:47.354401 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerStarted","Data":"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03"}
Feb 02 11:04:48 crc kubenswrapper[4845]: I0202 11:04:48.371720 4845 generic.go:334] "Generic (PLEG): container finished" podID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerID="80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03" exitCode=0
Feb 02 11:04:48 crc kubenswrapper[4845]: I0202 11:04:48.371781 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerDied","Data":"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03"}
Feb 02 11:04:49 crc kubenswrapper[4845]: I0202 11:04:49.387725 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerStarted","Data":"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae"}
Feb 02 11:04:49 crc kubenswrapper[4845]: I0202 11:04:49.415494 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8c6ts" podStartSLOduration=2.68541841 podStartE2EDuration="6.415473244s" podCreationTimestamp="2026-02-02 11:04:43 +0000 UTC" firstStartedPulling="2026-02-02 11:04:45.328179589 +0000 UTC m=+1966.419581039" lastFinishedPulling="2026-02-02 11:04:49.058234433 +0000 UTC m=+1970.149635873" observedRunningTime="2026-02-02 11:04:49.41041358 +0000 UTC m=+1970.501815040" watchObservedRunningTime="2026-02-02 11:04:49.415473244 +0000 UTC m=+1970.506874694"
Feb 02 11:04:50 crc kubenswrapper[4845]: I0202 11:04:50.711670 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:50 crc kubenswrapper[4845]: I0202 11:04:50.711845 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:50 crc kubenswrapper[4845]: I0202 11:04:50.772588 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:51 crc kubenswrapper[4845]: I0202 11:04:51.464357 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:52 crc kubenswrapper[4845]: I0202 11:04:52.343304 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:53 crc kubenswrapper[4845]: I0202 11:04:53.427791 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g2rm9" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="registry-server" containerID="cri-o://5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb" gracePeriod=2
Feb 02 11:04:53 crc kubenswrapper[4845]: I0202 11:04:53.871131 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:53 crc kubenswrapper[4845]: I0202 11:04:53.871512 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:53 crc kubenswrapper[4845]: I0202 11:04:53.932899 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.003419 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.064471 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0d03-account-create-update-79bpm"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.080525 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-mjh66"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.092403 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-mjh66"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.102962 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0d03-account-create-update-79bpm"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.178381 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content\") pod \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") "
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.178566 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ttzc\" (UniqueName: \"kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc\") pod \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") "
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.178740 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities\") pod \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\" (UID: \"a5d969d0-588a-489d-a3b7-936e8e2f0c4e\") "
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.180242 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities" (OuterVolumeSpecName: "utilities") pod "a5d969d0-588a-489d-a3b7-936e8e2f0c4e" (UID: "a5d969d0-588a-489d-a3b7-936e8e2f0c4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.186543 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc" (OuterVolumeSpecName: "kube-api-access-6ttzc") pod "a5d969d0-588a-489d-a3b7-936e8e2f0c4e" (UID: "a5d969d0-588a-489d-a3b7-936e8e2f0c4e"). InnerVolumeSpecName "kube-api-access-6ttzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.206494 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5d969d0-588a-489d-a3b7-936e8e2f0c4e" (UID: "a5d969d0-588a-489d-a3b7-936e8e2f0c4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.284011 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.284067 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.284083 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ttzc\" (UniqueName: \"kubernetes.io/projected/a5d969d0-588a-489d-a3b7-936e8e2f0c4e-kube-api-access-6ttzc\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.445852 4845 generic.go:334] "Generic (PLEG): container finished" podID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerID="5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb" exitCode=0
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.445973 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerDied","Data":"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"}
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.446103 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2rm9" event={"ID":"a5d969d0-588a-489d-a3b7-936e8e2f0c4e","Type":"ContainerDied","Data":"83963d09faf5438bbb2bea1fea716dfae030b316cca0108f56c9404d7d2ecfc4"}
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.446145 4845 scope.go:117] "RemoveContainer" containerID="5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.446143 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2rm9"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.494345 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.497247 4845 scope.go:117] "RemoveContainer" containerID="58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.513289 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2rm9"]
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.521227 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.531931 4845 scope.go:117] "RemoveContainer" containerID="4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.607153 4845 scope.go:117] "RemoveContainer" containerID="5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"
Feb 02 11:04:54 crc kubenswrapper[4845]: E0202 11:04:54.607787 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb\": container with ID starting with 5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb not found: ID does not exist" containerID="5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.607835 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb"} err="failed to get container status \"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb\": rpc error: code = NotFound desc = could not find container \"5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb\": container with ID starting with 5614282f6fcb67b0799762bc9a609e2fe90e0c2c698f6852a2ae79d0cc58c6eb not found: ID does not exist"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.607867 4845 scope.go:117] "RemoveContainer" containerID="58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"
Feb 02 11:04:54 crc kubenswrapper[4845]: E0202 11:04:54.608542 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c\": container with ID starting with 58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c not found: ID does not exist" containerID="58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.608606 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c"} err="failed to get container status \"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c\": rpc error: code = NotFound desc = could not find container \"58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c\": container with ID starting with 58e792a966d78256f900ef7ef7a4255a79fe30a8f09058b484b2d1639fe6d89c not found: ID does not exist"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.608643 4845 scope.go:117] "RemoveContainer" containerID="4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220"
Feb 02 11:04:54 crc kubenswrapper[4845]: E0202 11:04:54.609009 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220\": container with ID starting with 4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220 not found: ID does not exist" containerID="4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220"
Feb 02 11:04:54 crc kubenswrapper[4845]: I0202 11:04:54.609144 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220"} err="failed to get container status \"4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220\": rpc error: code = NotFound desc = could not find container \"4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220\": container with ID starting with 4b94519ad73f936d86efd08640937b6c1ba7a905b0250910dd3b103f84645220 not found: ID does not exist"
Feb 02 11:04:55 crc kubenswrapper[4845]: I0202 11:04:55.747025 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c902530-dc88-4300-9356-1f3938cfef4a" path="/var/lib/kubelet/pods/7c902530-dc88-4300-9356-1f3938cfef4a/volumes"
Feb 02 11:04:55 crc kubenswrapper[4845]: I0202 11:04:55.749797 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" path="/var/lib/kubelet/pods/a5d969d0-588a-489d-a3b7-936e8e2f0c4e/volumes"
Feb 02 11:04:55 crc kubenswrapper[4845]: I0202 11:04:55.753119 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45c3661-e66b-41a2-9a98-db215df0b2cf" path="/var/lib/kubelet/pods/f45c3661-e66b-41a2-9a98-db215df0b2cf/volumes"
Feb 02 11:04:56 crc kubenswrapper[4845]: I0202 11:04:56.344421 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:56 crc kubenswrapper[4845]: I0202 11:04:56.482641 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8c6ts" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="registry-server" containerID="cri-o://3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae" gracePeriod=2
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.015941 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.064819 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content\") pod \"ae5f408f-28d9-4652-a265-c49fa34ab604\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") "
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.065119 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities\") pod \"ae5f408f-28d9-4652-a265-c49fa34ab604\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") "
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.065202 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmthk\" (UniqueName: \"kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk\") pod \"ae5f408f-28d9-4652-a265-c49fa34ab604\" (UID: \"ae5f408f-28d9-4652-a265-c49fa34ab604\") "
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.066875 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities" (OuterVolumeSpecName: "utilities") pod "ae5f408f-28d9-4652-a265-c49fa34ab604" (UID: "ae5f408f-28d9-4652-a265-c49fa34ab604"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.075376 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk" (OuterVolumeSpecName: "kube-api-access-mmthk") pod "ae5f408f-28d9-4652-a265-c49fa34ab604" (UID: "ae5f408f-28d9-4652-a265-c49fa34ab604"). InnerVolumeSpecName "kube-api-access-mmthk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.131226 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae5f408f-28d9-4652-a265-c49fa34ab604" (UID: "ae5f408f-28d9-4652-a265-c49fa34ab604"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.169847 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.169920 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae5f408f-28d9-4652-a265-c49fa34ab604-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.169932 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmthk\" (UniqueName: \"kubernetes.io/projected/ae5f408f-28d9-4652-a265-c49fa34ab604-kube-api-access-mmthk\") on node \"crc\" DevicePath \"\""
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.495674 4845 generic.go:334] "Generic (PLEG): container finished" podID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerID="3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae" exitCode=0
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.496045 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerDied","Data":"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae"}
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.496097 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c6ts" event={"ID":"ae5f408f-28d9-4652-a265-c49fa34ab604","Type":"ContainerDied","Data":"b1870972ad32591386243137d4d6406259a3aed6d72d8c02868ba64bfdba894c"}
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.496105 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c6ts"
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.496120 4845 scope.go:117] "RemoveContainer" containerID="3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae"
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.522228 4845 scope.go:117] "RemoveContainer" containerID="80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03"
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.549829 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.562237 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8c6ts"]
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.568110 4845 scope.go:117] "RemoveContainer" containerID="26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79"
Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.619251 4845 scope.go:117] "RemoveContainer" containerID="3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae"
Feb 02 
11:04:57 crc kubenswrapper[4845]: E0202 11:04:57.619973 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae\": container with ID starting with 3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae not found: ID does not exist" containerID="3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.620010 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae"} err="failed to get container status \"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae\": rpc error: code = NotFound desc = could not find container \"3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae\": container with ID starting with 3ac8acdbdb10a2f21e5876abcb2d3667bfaec600f7a206f9b9e03ca620f48dae not found: ID does not exist" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.620037 4845 scope.go:117] "RemoveContainer" containerID="80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03" Feb 02 11:04:57 crc kubenswrapper[4845]: E0202 11:04:57.620562 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03\": container with ID starting with 80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03 not found: ID does not exist" containerID="80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.620640 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03"} err="failed to get container status 
\"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03\": rpc error: code = NotFound desc = could not find container \"80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03\": container with ID starting with 80e7346a150d6048bdbfda0a75a9f91d29b4a7dfc459101707ba10e54a0fac03 not found: ID does not exist" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.620694 4845 scope.go:117] "RemoveContainer" containerID="26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79" Feb 02 11:04:57 crc kubenswrapper[4845]: E0202 11:04:57.621126 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79\": container with ID starting with 26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79 not found: ID does not exist" containerID="26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.621157 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79"} err="failed to get container status \"26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79\": rpc error: code = NotFound desc = could not find container \"26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79\": container with ID starting with 26d9f500c910a918a9b073b63ed2836fd48901c0e0db17c8b0223fb520c3df79 not found: ID does not exist" Feb 02 11:04:57 crc kubenswrapper[4845]: I0202 11:04:57.727819 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" path="/var/lib/kubelet/pods/ae5f408f-28d9-4652-a265-c49fa34ab604/volumes" Feb 02 11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.705254 4845 scope.go:117] "RemoveContainer" containerID="7bd7e7f5964f9f073fd42624c4ca749f932b5e04b1db26f0a1d5d0f87ecbdb1f" Feb 02 
11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.729842 4845 scope.go:117] "RemoveContainer" containerID="b7177510879626b8e93e3bec07d3378fb19c2869cf116cbd055fdaad370ef7da" Feb 02 11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.798625 4845 scope.go:117] "RemoveContainer" containerID="74d2907667c2c914fc57daa5fed9146112b93f8bf9f401e92958c5811e8dc6f3" Feb 02 11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.861515 4845 scope.go:117] "RemoveContainer" containerID="6019d47869688d15147a8932c0b84dab21fa14445dd8f012a8b145c6bed8a74d" Feb 02 11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.922846 4845 scope.go:117] "RemoveContainer" containerID="8ac663309bf94373ab6aee2bd864d56af41199dceccdc883fa0aff4b2aa502f8" Feb 02 11:05:09 crc kubenswrapper[4845]: I0202 11:05:09.985811 4845 scope.go:117] "RemoveContainer" containerID="45867aad16ddef2c9f5a1df92c6f8cab6d7f01caa36027be8a34aecad2ed798b" Feb 02 11:05:10 crc kubenswrapper[4845]: I0202 11:05:10.037311 4845 scope.go:117] "RemoveContainer" containerID="d9f965fd9f9dbd88dc973b197a2f3c57c9e0f963db241ff90038d5e96106751f" Feb 02 11:05:10 crc kubenswrapper[4845]: I0202 11:05:10.061350 4845 scope.go:117] "RemoveContainer" containerID="8360a8bbf698c5c922fa9756941c2a4df903063c7069ac543b4e17f4e1f546d5" Feb 02 11:05:16 crc kubenswrapper[4845]: I0202 11:05:16.061086 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qwpzq"] Feb 02 11:05:16 crc kubenswrapper[4845]: I0202 11:05:16.073384 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qwpzq"] Feb 02 11:05:16 crc kubenswrapper[4845]: I0202 11:05:16.238282 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:05:16 crc kubenswrapper[4845]: I0202 
11:05:16.238381 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:05:17 crc kubenswrapper[4845]: I0202 11:05:17.726550 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff" path="/var/lib/kubelet/pods/cbc0d18f-cd37-49cf-8aa1-b62f1cf533ff/volumes" Feb 02 11:05:40 crc kubenswrapper[4845]: I0202 11:05:40.065798 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-725jn"] Feb 02 11:05:40 crc kubenswrapper[4845]: I0202 11:05:40.083766 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-725jn"] Feb 02 11:05:41 crc kubenswrapper[4845]: I0202 11:05:41.727381 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7439e987-75e8-4cc8-840a-742c6f07dea9" path="/var/lib/kubelet/pods/7439e987-75e8-4cc8-840a-742c6f07dea9/volumes" Feb 02 11:05:42 crc kubenswrapper[4845]: I0202 11:05:42.038072 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r4lj7"] Feb 02 11:05:42 crc kubenswrapper[4845]: I0202 11:05:42.054864 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-r4lj7"] Feb 02 11:05:43 crc kubenswrapper[4845]: I0202 11:05:43.727133 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2bad3a-8153-41d8-83f6-9f9caa16589b" path="/var/lib/kubelet/pods/7b2bad3a-8153-41d8-83f6-9f9caa16589b/volumes" Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.237502 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.237827 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.237876 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.238901 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.238981 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e" gracePeriod=600 Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.993803 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e" exitCode=0 Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.993910 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e"} Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.994456 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc"} Feb 02 11:05:46 crc kubenswrapper[4845]: I0202 11:05:46.994500 4845 scope.go:117] "RemoveContainer" containerID="d11ef40d44057c492efbc2859a87d8dd29e6dc58cef9dafd0bcece48331b0020" Feb 02 11:06:10 crc kubenswrapper[4845]: I0202 11:06:10.285352 4845 scope.go:117] "RemoveContainer" containerID="4bd5dbe3f7a8b7903c6ee652f72c588239fb784e24ebb2f47f5ca3b9452668fa" Feb 02 11:06:10 crc kubenswrapper[4845]: I0202 11:06:10.351927 4845 scope.go:117] "RemoveContainer" containerID="7ebfe4b502f94b649860108409e8e9b93586764606d6f176206def30cf86a61d" Feb 02 11:06:10 crc kubenswrapper[4845]: I0202 11:06:10.426404 4845 scope.go:117] "RemoveContainer" containerID="f926f2a33b190bee678900de2f6fcfec869f9b6aacd707a0eedb2404459e01e7" Feb 02 11:06:26 crc kubenswrapper[4845]: I0202 11:06:26.048008 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5zwdv"] Feb 02 11:06:26 crc kubenswrapper[4845]: I0202 11:06:26.059932 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5zwdv"] Feb 02 11:06:27 crc kubenswrapper[4845]: I0202 11:06:27.726139 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b" path="/var/lib/kubelet/pods/2b73ddcb-6457-4dd4-9d61-7ede43d6ea4b/volumes" Feb 02 11:07:10 crc kubenswrapper[4845]: I0202 11:07:10.582357 4845 scope.go:117] "RemoveContainer" containerID="8584a98006ccb553e74a988ef9d575c49271ebfc304ac836fbd4745ff8e13b8d" Feb 02 11:07:46 crc 
kubenswrapper[4845]: I0202 11:07:46.237774 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:07:46 crc kubenswrapper[4845]: I0202 11:07:46.238355 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.892110 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893056 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893069 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893082 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="extract-utilities" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893088 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="extract-utilities" Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893102 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="extract-content" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893110 4845 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="extract-content" Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893123 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="extract-utilities" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893130 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="extract-utilities" Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893151 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893157 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: E0202 11:08:09.893170 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="extract-content" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893175 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="extract-content" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893420 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5f408f-28d9-4652-a265-c49fa34ab604" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.893442 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d969d0-588a-489d-a3b7-936e8e2f0c4e" containerName="registry-server" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.895054 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:09 crc kubenswrapper[4845]: I0202 11:08:09.912957 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.041273 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.041508 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbwq\" (UniqueName: \"kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.041977 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.145036 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.145132 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qcbwq\" (UniqueName: \"kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.145310 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.145568 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.145955 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.168053 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbwq\" (UniqueName: \"kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq\") pod \"community-operators-qqwd7\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.218569 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:10 crc kubenswrapper[4845]: I0202 11:08:10.873163 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:11 crc kubenswrapper[4845]: I0202 11:08:11.588422 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerID="0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0" exitCode=0 Feb 02 11:08:11 crc kubenswrapper[4845]: I0202 11:08:11.588506 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerDied","Data":"0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0"} Feb 02 11:08:11 crc kubenswrapper[4845]: I0202 11:08:11.588767 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerStarted","Data":"4e21bfde1a1ee3b49ab06588e29302cbcb090c8fd115755fa8b2ad925a6ae8aa"} Feb 02 11:08:12 crc kubenswrapper[4845]: I0202 11:08:12.600487 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerStarted","Data":"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814"} Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.078526 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.081171 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.100719 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.235231 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.235305 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.235473 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwvj\" (UniqueName: \"kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.337835 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwvj\" (UniqueName: \"kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.338172 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.338218 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.338680 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.338788 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.361392 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwvj\" (UniqueName: \"kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj\") pod \"redhat-operators-vzkq6\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.399687 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.629203 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerID="24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814" exitCode=0 Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.629355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerDied","Data":"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814"} Feb 02 11:08:13 crc kubenswrapper[4845]: I0202 11:08:13.960350 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:14 crc kubenswrapper[4845]: I0202 11:08:14.641575 4845 generic.go:334] "Generic (PLEG): container finished" podID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerID="4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486" exitCode=0 Feb 02 11:08:14 crc kubenswrapper[4845]: I0202 11:08:14.641675 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerDied","Data":"4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486"} Feb 02 11:08:14 crc kubenswrapper[4845]: I0202 11:08:14.642217 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerStarted","Data":"498eff50f0467701d7486f7e5e6cf9b36286f8e1d73a699ceb16df7b8ad64222"} Feb 02 11:08:14 crc kubenswrapper[4845]: I0202 11:08:14.648199 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" 
event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerStarted","Data":"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4"} Feb 02 11:08:14 crc kubenswrapper[4845]: I0202 11:08:14.700711 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qqwd7" podStartSLOduration=3.180613261 podStartE2EDuration="5.700665572s" podCreationTimestamp="2026-02-02 11:08:09 +0000 UTC" firstStartedPulling="2026-02-02 11:08:11.591379005 +0000 UTC m=+2172.682780455" lastFinishedPulling="2026-02-02 11:08:14.111431326 +0000 UTC m=+2175.202832766" observedRunningTime="2026-02-02 11:08:14.682044148 +0000 UTC m=+2175.773445598" watchObservedRunningTime="2026-02-02 11:08:14.700665572 +0000 UTC m=+2175.792067022" Feb 02 11:08:15 crc kubenswrapper[4845]: I0202 11:08:15.678528 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerStarted","Data":"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6"} Feb 02 11:08:16 crc kubenswrapper[4845]: I0202 11:08:16.237632 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:08:16 crc kubenswrapper[4845]: I0202 11:08:16.237695 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:08:18 crc kubenswrapper[4845]: I0202 11:08:18.711604 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerID="1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6" exitCode=0 Feb 02 11:08:18 crc kubenswrapper[4845]: I0202 11:08:18.711683 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerDied","Data":"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6"} Feb 02 11:08:19 crc kubenswrapper[4845]: I0202 11:08:19.728990 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerStarted","Data":"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6"} Feb 02 11:08:19 crc kubenswrapper[4845]: I0202 11:08:19.767148 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vzkq6" podStartSLOduration=2.217484662 podStartE2EDuration="6.767131519s" podCreationTimestamp="2026-02-02 11:08:13 +0000 UTC" firstStartedPulling="2026-02-02 11:08:14.644442101 +0000 UTC m=+2175.735843551" lastFinishedPulling="2026-02-02 11:08:19.194088958 +0000 UTC m=+2180.285490408" observedRunningTime="2026-02-02 11:08:19.748601919 +0000 UTC m=+2180.840003369" watchObservedRunningTime="2026-02-02 11:08:19.767131519 +0000 UTC m=+2180.858532969" Feb 02 11:08:20 crc kubenswrapper[4845]: I0202 11:08:20.220220 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:20 crc kubenswrapper[4845]: I0202 11:08:20.220627 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:20 crc kubenswrapper[4845]: I0202 11:08:20.283959 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:20 crc 
kubenswrapper[4845]: I0202 11:08:20.802664 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:21 crc kubenswrapper[4845]: I0202 11:08:21.470641 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:22 crc kubenswrapper[4845]: I0202 11:08:22.765339 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qqwd7" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="registry-server" containerID="cri-o://b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4" gracePeriod=2 Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.298657 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.400910 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.400977 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.411644 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content\") pod \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.411787 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities\") pod \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " Feb 02 
11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.411874 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcbwq\" (UniqueName: \"kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq\") pod \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\" (UID: \"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4\") " Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.414646 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities" (OuterVolumeSpecName: "utilities") pod "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" (UID: "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.424480 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq" (OuterVolumeSpecName: "kube-api-access-qcbwq") pod "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" (UID: "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4"). InnerVolumeSpecName "kube-api-access-qcbwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.467331 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" (UID: "ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.514729 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.514771 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcbwq\" (UniqueName: \"kubernetes.io/projected/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-kube-api-access-qcbwq\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.514781 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.790514 4845 generic.go:334] "Generic (PLEG): container finished" podID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerID="b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4" exitCode=0 Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.790560 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerDied","Data":"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4"} Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.790587 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqwd7" event={"ID":"ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4","Type":"ContainerDied","Data":"4e21bfde1a1ee3b49ab06588e29302cbcb090c8fd115755fa8b2ad925a6ae8aa"} Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.790606 4845 scope.go:117] "RemoveContainer" containerID="b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 
11:08:23.790730 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqwd7" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.818701 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.823983 4845 scope.go:117] "RemoveContainer" containerID="24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.828928 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qqwd7"] Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.854419 4845 scope.go:117] "RemoveContainer" containerID="0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.934017 4845 scope.go:117] "RemoveContainer" containerID="b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4" Feb 02 11:08:23 crc kubenswrapper[4845]: E0202 11:08:23.934801 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4\": container with ID starting with b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4 not found: ID does not exist" containerID="b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.934858 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4"} err="failed to get container status \"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4\": rpc error: code = NotFound desc = could not find container \"b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4\": container with ID starting with 
b4be15d794a32e41d2e3bb12536560ef1eef2d87d720ceb166d9996a9713dfb4 not found: ID does not exist" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.934994 4845 scope.go:117] "RemoveContainer" containerID="24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814" Feb 02 11:08:23 crc kubenswrapper[4845]: E0202 11:08:23.935495 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814\": container with ID starting with 24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814 not found: ID does not exist" containerID="24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.935530 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814"} err="failed to get container status \"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814\": rpc error: code = NotFound desc = could not find container \"24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814\": container with ID starting with 24974c786c52f7413944db868bbd3565db04e2c49ae3260312b96556740b6814 not found: ID does not exist" Feb 02 11:08:23 crc kubenswrapper[4845]: I0202 11:08:23.935554 4845 scope.go:117] "RemoveContainer" containerID="0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0" Feb 02 11:08:23 crc kubenswrapper[4845]: E0202 11:08:23.935916 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0\": container with ID starting with 0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0 not found: ID does not exist" containerID="0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0" Feb 02 11:08:23 crc 
kubenswrapper[4845]: I0202 11:08:23.935937 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0"} err="failed to get container status \"0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0\": rpc error: code = NotFound desc = could not find container \"0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0\": container with ID starting with 0a23edc901a67b4f93b00fd9d012f9a5dd5b1c900604091e293e58e20c4e3ee0 not found: ID does not exist" Feb 02 11:08:24 crc kubenswrapper[4845]: I0202 11:08:24.452792 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vzkq6" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="registry-server" probeResult="failure" output=< Feb 02 11:08:24 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:08:24 crc kubenswrapper[4845]: > Feb 02 11:08:25 crc kubenswrapper[4845]: I0202 11:08:25.727681 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" path="/var/lib/kubelet/pods/ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4/volumes" Feb 02 11:08:33 crc kubenswrapper[4845]: I0202 11:08:33.458749 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:33 crc kubenswrapper[4845]: I0202 11:08:33.526428 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:33 crc kubenswrapper[4845]: I0202 11:08:33.702614 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:34 crc kubenswrapper[4845]: I0202 11:08:34.939300 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vzkq6" 
podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="registry-server" containerID="cri-o://636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6" gracePeriod=2 Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.469273 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.617364 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwvj\" (UniqueName: \"kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj\") pod \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.617833 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content\") pod \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.617877 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities\") pod \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\" (UID: \"df0bb4b2-0075-4535-83d7-ff2a511bcfc4\") " Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.618814 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities" (OuterVolumeSpecName: "utilities") pod "df0bb4b2-0075-4535-83d7-ff2a511bcfc4" (UID: "df0bb4b2-0075-4535-83d7-ff2a511bcfc4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.619157 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.625218 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj" (OuterVolumeSpecName: "kube-api-access-jrwvj") pod "df0bb4b2-0075-4535-83d7-ff2a511bcfc4" (UID: "df0bb4b2-0075-4535-83d7-ff2a511bcfc4"). InnerVolumeSpecName "kube-api-access-jrwvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.722595 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwvj\" (UniqueName: \"kubernetes.io/projected/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-kube-api-access-jrwvj\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.764199 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df0bb4b2-0075-4535-83d7-ff2a511bcfc4" (UID: "df0bb4b2-0075-4535-83d7-ff2a511bcfc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.824951 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df0bb4b2-0075-4535-83d7-ff2a511bcfc4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.952285 4845 generic.go:334] "Generic (PLEG): container finished" podID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerID="636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6" exitCode=0 Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.952363 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzkq6" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.952378 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerDied","Data":"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6"} Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.953651 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzkq6" event={"ID":"df0bb4b2-0075-4535-83d7-ff2a511bcfc4","Type":"ContainerDied","Data":"498eff50f0467701d7486f7e5e6cf9b36286f8e1d73a699ceb16df7b8ad64222"} Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.953675 4845 scope.go:117] "RemoveContainer" containerID="636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6" Feb 02 11:08:35 crc kubenswrapper[4845]: I0202 11:08:35.974620 4845 scope.go:117] "RemoveContainer" containerID="1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.003517 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 
11:08:36.016404 4845 scope.go:117] "RemoveContainer" containerID="4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.021610 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vzkq6"] Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.066475 4845 scope.go:117] "RemoveContainer" containerID="636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6" Feb 02 11:08:36 crc kubenswrapper[4845]: E0202 11:08:36.067100 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6\": container with ID starting with 636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6 not found: ID does not exist" containerID="636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.067232 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6"} err="failed to get container status \"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6\": rpc error: code = NotFound desc = could not find container \"636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6\": container with ID starting with 636ab465a8078450ddb10fd3aae1702e75d04808550e3cd60e630da84b8ef3c6 not found: ID does not exist" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.067318 4845 scope.go:117] "RemoveContainer" containerID="1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6" Feb 02 11:08:36 crc kubenswrapper[4845]: E0202 11:08:36.067655 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6\": container with ID 
starting with 1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6 not found: ID does not exist" containerID="1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.067751 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6"} err="failed to get container status \"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6\": rpc error: code = NotFound desc = could not find container \"1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6\": container with ID starting with 1cceb1a8e5ffabf74852f80059be6adcba7a287af9c9ef9744104fb6b4a8faf6 not found: ID does not exist" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.067843 4845 scope.go:117] "RemoveContainer" containerID="4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486" Feb 02 11:08:36 crc kubenswrapper[4845]: E0202 11:08:36.068331 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486\": container with ID starting with 4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486 not found: ID does not exist" containerID="4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486" Feb 02 11:08:36 crc kubenswrapper[4845]: I0202 11:08:36.068429 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486"} err="failed to get container status \"4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486\": rpc error: code = NotFound desc = could not find container \"4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486\": container with ID starting with 4687b265863fa3fb6587f2d9ada8ed78a0b6527a64946cba7a527aa7c0a55486 not found: 
ID does not exist" Feb 02 11:08:37 crc kubenswrapper[4845]: I0202 11:08:37.728859 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" path="/var/lib/kubelet/pods/df0bb4b2-0075-4535-83d7-ff2a511bcfc4/volumes" Feb 02 11:08:46 crc kubenswrapper[4845]: I0202 11:08:46.237151 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:08:46 crc kubenswrapper[4845]: I0202 11:08:46.237672 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:08:46 crc kubenswrapper[4845]: I0202 11:08:46.237715 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:08:46 crc kubenswrapper[4845]: I0202 11:08:46.238674 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:08:46 crc kubenswrapper[4845]: I0202 11:08:46.238728 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" 
containerID="cri-o://1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" gracePeriod=600 Feb 02 11:08:46 crc kubenswrapper[4845]: E0202 11:08:46.367074 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:08:47 crc kubenswrapper[4845]: I0202 11:08:47.075675 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" exitCode=0 Feb 02 11:08:47 crc kubenswrapper[4845]: I0202 11:08:47.075730 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc"} Feb 02 11:08:47 crc kubenswrapper[4845]: I0202 11:08:47.075770 4845 scope.go:117] "RemoveContainer" containerID="0b390714f50dcbc5e75002b4a0473fa9805c797be6ccee1c185ac005a97bc29e" Feb 02 11:08:47 crc kubenswrapper[4845]: I0202 11:08:47.076967 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:08:47 crc kubenswrapper[4845]: E0202 11:08:47.077464 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:08:58 crc kubenswrapper[4845]: I0202 11:08:58.713266 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:08:58 crc kubenswrapper[4845]: E0202 11:08:58.714130 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:09:12 crc kubenswrapper[4845]: I0202 11:09:12.713079 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:09:12 crc kubenswrapper[4845]: E0202 11:09:12.714207 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:09:25 crc kubenswrapper[4845]: I0202 11:09:25.712665 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:09:25 crc kubenswrapper[4845]: E0202 11:09:25.713425 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:09:40 crc kubenswrapper[4845]: I0202 11:09:40.713123 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:09:40 crc kubenswrapper[4845]: E0202 11:09:40.713924 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:09:52 crc kubenswrapper[4845]: I0202 11:09:52.713635 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:09:52 crc kubenswrapper[4845]: E0202 11:09:52.714429 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:10:03 crc kubenswrapper[4845]: I0202 11:10:03.713794 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:10:03 crc kubenswrapper[4845]: E0202 11:10:03.715376 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:10:17 crc kubenswrapper[4845]: I0202 11:10:17.713367 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:10:17 crc kubenswrapper[4845]: E0202 11:10:17.714712 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:10:29 crc kubenswrapper[4845]: I0202 11:10:29.718763 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:10:29 crc kubenswrapper[4845]: E0202 11:10:29.719481 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:10:43 crc kubenswrapper[4845]: I0202 11:10:43.713500 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:10:43 crc kubenswrapper[4845]: E0202 11:10:43.714324 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:10:54 crc kubenswrapper[4845]: I0202 11:10:54.713524 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:10:54 crc kubenswrapper[4845]: E0202 11:10:54.714383 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:11:09 crc kubenswrapper[4845]: I0202 11:11:09.721124 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:11:09 crc kubenswrapper[4845]: E0202 11:11:09.722168 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:11:20 crc kubenswrapper[4845]: I0202 11:11:20.714776 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:11:20 crc kubenswrapper[4845]: E0202 11:11:20.716744 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:11:32 crc kubenswrapper[4845]: I0202 11:11:32.712834 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:11:32 crc kubenswrapper[4845]: E0202 11:11:32.713730 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:11:46 crc kubenswrapper[4845]: I0202 11:11:46.712583 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:11:46 crc kubenswrapper[4845]: E0202 11:11:46.713364 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:11:59 crc kubenswrapper[4845]: I0202 11:11:59.723368 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:11:59 crc kubenswrapper[4845]: E0202 11:11:59.734104 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:12:11 crc kubenswrapper[4845]: I0202 11:12:11.712866 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:12:11 crc kubenswrapper[4845]: E0202 11:12:11.713939 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:12:26 crc kubenswrapper[4845]: I0202 11:12:26.713328 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:12:26 crc kubenswrapper[4845]: E0202 11:12:26.714311 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:12:40 crc kubenswrapper[4845]: I0202 11:12:40.713089 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:12:40 crc kubenswrapper[4845]: E0202 11:12:40.713846 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:12:51 crc kubenswrapper[4845]: I0202 11:12:51.712845 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:12:51 crc kubenswrapper[4845]: E0202 11:12:51.713574 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:13:02 crc kubenswrapper[4845]: I0202 11:13:02.712520 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:13:02 crc kubenswrapper[4845]: E0202 11:13:02.713229 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:13:13 crc kubenswrapper[4845]: I0202 11:13:13.712641 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:13:13 crc kubenswrapper[4845]: E0202 11:13:13.713587 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:13:27 crc kubenswrapper[4845]: I0202 11:13:27.712344 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:13:27 crc kubenswrapper[4845]: E0202 11:13:27.713335 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:13:38 crc kubenswrapper[4845]: I0202 11:13:38.714021 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:13:38 crc kubenswrapper[4845]: E0202 11:13:38.715394 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:13:52 crc kubenswrapper[4845]: I0202 11:13:52.714053 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:13:53 crc kubenswrapper[4845]: I0202 11:13:53.247639 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033"} Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.154024 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn"] Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165001 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="extract-utilities" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165047 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="extract-utilities" Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165075 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="extract-utilities" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165084 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="extract-utilities" Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165128 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165138 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165173 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165182 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" 
containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165208 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="extract-content" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165216 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="extract-content" Feb 02 11:15:00 crc kubenswrapper[4845]: E0202 11:15:00.165248 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="extract-content" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.165256 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="extract-content" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.166035 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3c8b0c-a23c-4e80-b25d-b35e1ee487b4" containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.166073 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0bb4b2-0075-4535-83d7-ff2a511bcfc4" containerName="registry-server" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.167997 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.175903 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.177037 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.183209 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn"] Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.301452 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.301526 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.301607 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfgm2\" (UniqueName: \"kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.407251 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfgm2\" (UniqueName: \"kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.407485 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.407521 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.408702 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.415048 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.430287 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfgm2\" (UniqueName: \"kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2\") pod \"collect-profiles-29500515-lbbxn\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.505830 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:00 crc kubenswrapper[4845]: I0202 11:15:00.976797 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn"] Feb 02 11:15:01 crc kubenswrapper[4845]: I0202 11:15:01.926743 4845 generic.go:334] "Generic (PLEG): container finished" podID="dfe7b56f-4954-457d-8bb8-a0a50096cfb9" containerID="7f56eeae3b5e853cc5e5d1ab4c3fe6f56e0c913955d2cd95163f2033cd7e1417" exitCode=0 Feb 02 11:15:01 crc kubenswrapper[4845]: I0202 11:15:01.926814 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" event={"ID":"dfe7b56f-4954-457d-8bb8-a0a50096cfb9","Type":"ContainerDied","Data":"7f56eeae3b5e853cc5e5d1ab4c3fe6f56e0c913955d2cd95163f2033cd7e1417"} Feb 02 11:15:01 crc kubenswrapper[4845]: I0202 11:15:01.927150 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" 
event={"ID":"dfe7b56f-4954-457d-8bb8-a0a50096cfb9","Type":"ContainerStarted","Data":"9ec417b0fb92ea7a169c394bf23e5e292e5d100d941622f9d3876ee0cfbd387d"} Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.450634 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.553053 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume\") pod \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.553219 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfgm2\" (UniqueName: \"kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2\") pod \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.553289 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume\") pod \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\" (UID: \"dfe7b56f-4954-457d-8bb8-a0a50096cfb9\") " Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.553829 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume" (OuterVolumeSpecName: "config-volume") pod "dfe7b56f-4954-457d-8bb8-a0a50096cfb9" (UID: "dfe7b56f-4954-457d-8bb8-a0a50096cfb9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.554202 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.562656 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2" (OuterVolumeSpecName: "kube-api-access-qfgm2") pod "dfe7b56f-4954-457d-8bb8-a0a50096cfb9" (UID: "dfe7b56f-4954-457d-8bb8-a0a50096cfb9"). InnerVolumeSpecName "kube-api-access-qfgm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.569683 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dfe7b56f-4954-457d-8bb8-a0a50096cfb9" (UID: "dfe7b56f-4954-457d-8bb8-a0a50096cfb9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.655973 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.656014 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfgm2\" (UniqueName: \"kubernetes.io/projected/dfe7b56f-4954-457d-8bb8-a0a50096cfb9-kube-api-access-qfgm2\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.947949 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" event={"ID":"dfe7b56f-4954-457d-8bb8-a0a50096cfb9","Type":"ContainerDied","Data":"9ec417b0fb92ea7a169c394bf23e5e292e5d100d941622f9d3876ee0cfbd387d"} Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.948000 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec417b0fb92ea7a169c394bf23e5e292e5d100d941622f9d3876ee0cfbd387d" Feb 02 11:15:03 crc kubenswrapper[4845]: I0202 11:15:03.948001 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn" Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.533824 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"] Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.542959 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-ncqjg"] Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.985000 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:04 crc kubenswrapper[4845]: E0202 11:15:04.985668 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe7b56f-4954-457d-8bb8-a0a50096cfb9" containerName="collect-profiles" Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.985688 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe7b56f-4954-457d-8bb8-a0a50096cfb9" containerName="collect-profiles" Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.985988 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe7b56f-4954-457d-8bb8-a0a50096cfb9" containerName="collect-profiles" Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.987940 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:04 crc kubenswrapper[4845]: I0202 11:15:04.995953 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.091373 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.091452 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2sz\" (UniqueName: \"kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.091582 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.194472 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.194699 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.194741 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq2sz\" (UniqueName: \"kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.195077 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.195204 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.212784 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq2sz\" (UniqueName: \"kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz\") pod \"redhat-marketplace-rvh42\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.310781 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.727789 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bde55e-4121-4b71-b6f4-6cb3a9acd82e" path="/var/lib/kubelet/pods/30bde55e-4121-4b71-b6f4-6cb3a9acd82e/volumes" Feb 02 11:15:05 crc kubenswrapper[4845]: W0202 11:15:05.857110 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869bb2a7_f892_4849_ac0e_e221a7987251.slice/crio-ed60c4aa373b4a91c9b09fc640177f562ba99e067409a68ddc930f16c13babac WatchSource:0}: Error finding container ed60c4aa373b4a91c9b09fc640177f562ba99e067409a68ddc930f16c13babac: Status 404 returned error can't find the container with id ed60c4aa373b4a91c9b09fc640177f562ba99e067409a68ddc930f16c13babac Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.857881 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:05 crc kubenswrapper[4845]: I0202 11:15:05.967910 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerStarted","Data":"ed60c4aa373b4a91c9b09fc640177f562ba99e067409a68ddc930f16c13babac"} Feb 02 11:15:06 crc kubenswrapper[4845]: I0202 11:15:06.979802 4845 generic.go:334] "Generic (PLEG): container finished" podID="869bb2a7-f892-4849-ac0e-e221a7987251" containerID="26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50" exitCode=0 Feb 02 11:15:06 crc kubenswrapper[4845]: I0202 11:15:06.979940 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerDied","Data":"26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50"} Feb 02 11:15:06 crc kubenswrapper[4845]: I0202 
11:15:06.984205 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:15:07 crc kubenswrapper[4845]: I0202 11:15:07.992547 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerStarted","Data":"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7"} Feb 02 11:15:10 crc kubenswrapper[4845]: I0202 11:15:10.016599 4845 generic.go:334] "Generic (PLEG): container finished" podID="869bb2a7-f892-4849-ac0e-e221a7987251" containerID="9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7" exitCode=0 Feb 02 11:15:10 crc kubenswrapper[4845]: I0202 11:15:10.016702 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerDied","Data":"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7"} Feb 02 11:15:10 crc kubenswrapper[4845]: I0202 11:15:10.820090 4845 scope.go:117] "RemoveContainer" containerID="523a5bcb42e2050e61412c7d355eedcd8752caa8ebcb7f962918e3ce965038aa" Feb 02 11:15:11 crc kubenswrapper[4845]: I0202 11:15:11.028118 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerStarted","Data":"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0"} Feb 02 11:15:15 crc kubenswrapper[4845]: I0202 11:15:15.312020 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:15 crc kubenswrapper[4845]: I0202 11:15:15.312379 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:15 crc kubenswrapper[4845]: I0202 11:15:15.360399 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:15 crc kubenswrapper[4845]: I0202 11:15:15.382089 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvh42" podStartSLOduration=7.82161947 podStartE2EDuration="11.382069045s" podCreationTimestamp="2026-02-02 11:15:04 +0000 UTC" firstStartedPulling="2026-02-02 11:15:06.98380352 +0000 UTC m=+2588.075204980" lastFinishedPulling="2026-02-02 11:15:10.544253105 +0000 UTC m=+2591.635654555" observedRunningTime="2026-02-02 11:15:11.045043649 +0000 UTC m=+2592.136445119" watchObservedRunningTime="2026-02-02 11:15:15.382069045 +0000 UTC m=+2596.473470495" Feb 02 11:15:16 crc kubenswrapper[4845]: I0202 11:15:16.191343 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:16 crc kubenswrapper[4845]: I0202 11:15:16.270126 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.101213 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvh42" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="registry-server" containerID="cri-o://68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0" gracePeriod=2 Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.635455 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.729105 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq2sz\" (UniqueName: \"kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz\") pod \"869bb2a7-f892-4849-ac0e-e221a7987251\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.729282 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content\") pod \"869bb2a7-f892-4849-ac0e-e221a7987251\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.729335 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities\") pod \"869bb2a7-f892-4849-ac0e-e221a7987251\" (UID: \"869bb2a7-f892-4849-ac0e-e221a7987251\") " Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.730308 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities" (OuterVolumeSpecName: "utilities") pod "869bb2a7-f892-4849-ac0e-e221a7987251" (UID: "869bb2a7-f892-4849-ac0e-e221a7987251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.735081 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz" (OuterVolumeSpecName: "kube-api-access-qq2sz") pod "869bb2a7-f892-4849-ac0e-e221a7987251" (UID: "869bb2a7-f892-4849-ac0e-e221a7987251"). InnerVolumeSpecName "kube-api-access-qq2sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.757400 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869bb2a7-f892-4849-ac0e-e221a7987251" (UID: "869bb2a7-f892-4849-ac0e-e221a7987251"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.833864 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq2sz\" (UniqueName: \"kubernetes.io/projected/869bb2a7-f892-4849-ac0e-e221a7987251-kube-api-access-qq2sz\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.834075 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:18 crc kubenswrapper[4845]: I0202 11:15:18.834095 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869bb2a7-f892-4849-ac0e-e221a7987251-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.114923 4845 generic.go:334] "Generic (PLEG): container finished" podID="869bb2a7-f892-4849-ac0e-e221a7987251" containerID="68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0" exitCode=0 Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.114987 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerDied","Data":"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0"} Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.115019 4845 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rvh42" event={"ID":"869bb2a7-f892-4849-ac0e-e221a7987251","Type":"ContainerDied","Data":"ed60c4aa373b4a91c9b09fc640177f562ba99e067409a68ddc930f16c13babac"} Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.115036 4845 scope.go:117] "RemoveContainer" containerID="68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.115192 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvh42" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.152334 4845 scope.go:117] "RemoveContainer" containerID="9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.190057 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.198283 4845 scope.go:117] "RemoveContainer" containerID="26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.227751 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvh42"] Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.274827 4845 scope.go:117] "RemoveContainer" containerID="68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0" Feb 02 11:15:19 crc kubenswrapper[4845]: E0202 11:15:19.275554 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0\": container with ID starting with 68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0 not found: ID does not exist" containerID="68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.275601 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0"} err="failed to get container status \"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0\": rpc error: code = NotFound desc = could not find container \"68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0\": container with ID starting with 68cd5f0ac01a36ca7a9ded3a0b4c9ef97c96705fc1fd0960be62ed0ce82d05e0 not found: ID does not exist" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.275628 4845 scope.go:117] "RemoveContainer" containerID="9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7" Feb 02 11:15:19 crc kubenswrapper[4845]: E0202 11:15:19.276378 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7\": container with ID starting with 9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7 not found: ID does not exist" containerID="9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.276505 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7"} err="failed to get container status \"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7\": rpc error: code = NotFound desc = could not find container \"9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7\": container with ID starting with 9c6da0a6bdf6ad9ffea1463a6599823f61c5157e65caf9cbdab9698c761dd0b7 not found: ID does not exist" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.276606 4845 scope.go:117] "RemoveContainer" containerID="26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50" Feb 02 11:15:19 crc kubenswrapper[4845]: E0202 
11:15:19.277126 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50\": container with ID starting with 26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50 not found: ID does not exist" containerID="26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.277159 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50"} err="failed to get container status \"26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50\": rpc error: code = NotFound desc = could not find container \"26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50\": container with ID starting with 26db0c558d019a5675900b1570f42ffba048eb1fb33377cdc063d50635f56e50 not found: ID does not exist" Feb 02 11:15:19 crc kubenswrapper[4845]: I0202 11:15:19.729705 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" path="/var/lib/kubelet/pods/869bb2a7-f892-4849-ac0e-e221a7987251/volumes" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.433179 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:15:57 crc kubenswrapper[4845]: E0202 11:15:57.434315 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="registry-server" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.434332 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="registry-server" Feb 02 11:15:57 crc kubenswrapper[4845]: E0202 11:15:57.434360 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="extract-content" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.434368 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="extract-content" Feb 02 11:15:57 crc kubenswrapper[4845]: E0202 11:15:57.434414 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="extract-utilities" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.434423 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="extract-utilities" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.434715 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="869bb2a7-f892-4849-ac0e-e221a7987251" containerName="registry-server" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.436781 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.445001 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.570476 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.572911 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjbm\" (UniqueName: \"kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm\") pod \"certified-operators-5s2tp\" (UID: 
\"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.572988 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.676410 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.676527 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjbm\" (UniqueName: \"kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.676550 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.677158 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities\") pod \"certified-operators-5s2tp\" (UID: 
\"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.677414 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.696755 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjbm\" (UniqueName: \"kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm\") pod \"certified-operators-5s2tp\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:57 crc kubenswrapper[4845]: I0202 11:15:57.770071 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:15:58 crc kubenswrapper[4845]: I0202 11:15:58.346287 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:15:58 crc kubenswrapper[4845]: I0202 11:15:58.560693 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerStarted","Data":"0eadb76ac2e8cfb4b265a90d64feb6cfe86c5e67914191034e8e058d1dd59529"} Feb 02 11:15:59 crc kubenswrapper[4845]: I0202 11:15:59.572346 4845 generic.go:334] "Generic (PLEG): container finished" podID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerID="64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4" exitCode=0 Feb 02 11:15:59 crc kubenswrapper[4845]: I0202 11:15:59.572409 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerDied","Data":"64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4"} Feb 02 11:16:01 crc kubenswrapper[4845]: I0202 11:16:01.597502 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerStarted","Data":"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb"} Feb 02 11:16:02 crc kubenswrapper[4845]: I0202 11:16:02.609475 4845 generic.go:334] "Generic (PLEG): container finished" podID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerID="3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb" exitCode=0 Feb 02 11:16:02 crc kubenswrapper[4845]: I0202 11:16:02.609550 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" 
event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerDied","Data":"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb"} Feb 02 11:16:03 crc kubenswrapper[4845]: I0202 11:16:03.623473 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerStarted","Data":"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1"} Feb 02 11:16:03 crc kubenswrapper[4845]: I0202 11:16:03.649847 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5s2tp" podStartSLOduration=3.109125584 podStartE2EDuration="6.649802917s" podCreationTimestamp="2026-02-02 11:15:57 +0000 UTC" firstStartedPulling="2026-02-02 11:15:59.574745997 +0000 UTC m=+2640.666147447" lastFinishedPulling="2026-02-02 11:16:03.11542334 +0000 UTC m=+2644.206824780" observedRunningTime="2026-02-02 11:16:03.642646389 +0000 UTC m=+2644.734047839" watchObservedRunningTime="2026-02-02 11:16:03.649802917 +0000 UTC m=+2644.741204367" Feb 02 11:16:07 crc kubenswrapper[4845]: I0202 11:16:07.770491 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:07 crc kubenswrapper[4845]: I0202 11:16:07.771086 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:07 crc kubenswrapper[4845]: I0202 11:16:07.824167 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:08 crc kubenswrapper[4845]: I0202 11:16:08.724874 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:08 crc kubenswrapper[4845]: I0202 11:16:08.785344 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:16:10 crc kubenswrapper[4845]: I0202 11:16:10.695720 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5s2tp" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="registry-server" containerID="cri-o://f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1" gracePeriod=2 Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.219253 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.318572 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjbm\" (UniqueName: \"kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm\") pod \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.318672 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities\") pod \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.318727 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content\") pod \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\" (UID: \"6565f54e-2b32-49c5-bcca-06363f5bd2cb\") " Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.320082 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities" (OuterVolumeSpecName: "utilities") pod "6565f54e-2b32-49c5-bcca-06363f5bd2cb" (UID: 
"6565f54e-2b32-49c5-bcca-06363f5bd2cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.326005 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm" (OuterVolumeSpecName: "kube-api-access-8jjbm") pod "6565f54e-2b32-49c5-bcca-06363f5bd2cb" (UID: "6565f54e-2b32-49c5-bcca-06363f5bd2cb"). InnerVolumeSpecName "kube-api-access-8jjbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.375160 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6565f54e-2b32-49c5-bcca-06363f5bd2cb" (UID: "6565f54e-2b32-49c5-bcca-06363f5bd2cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.421847 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.421906 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6565f54e-2b32-49c5-bcca-06363f5bd2cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.421926 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjbm\" (UniqueName: \"kubernetes.io/projected/6565f54e-2b32-49c5-bcca-06363f5bd2cb-kube-api-access-8jjbm\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.735623 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerID="f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1" exitCode=0 Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.735673 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerDied","Data":"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1"} Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.735703 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s2tp" event={"ID":"6565f54e-2b32-49c5-bcca-06363f5bd2cb","Type":"ContainerDied","Data":"0eadb76ac2e8cfb4b265a90d64feb6cfe86c5e67914191034e8e058d1dd59529"} Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.735719 4845 scope.go:117] "RemoveContainer" containerID="f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.735735 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s2tp" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.775672 4845 scope.go:117] "RemoveContainer" containerID="3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.828592 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.834056 4845 scope.go:117] "RemoveContainer" containerID="64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.868360 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5s2tp"] Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.953692 4845 scope.go:117] "RemoveContainer" containerID="f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1" Feb 02 11:16:11 crc kubenswrapper[4845]: E0202 11:16:11.956184 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1\": container with ID starting with f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1 not found: ID does not exist" containerID="f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.956231 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1"} err="failed to get container status \"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1\": rpc error: code = NotFound desc = could not find container \"f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1\": container with ID starting with f4aeaf1febfad31ffcdc67a3c3800e4db1149b548940bc3e4085c7d0f8dfe7d1 not 
found: ID does not exist" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.956258 4845 scope.go:117] "RemoveContainer" containerID="3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb" Feb 02 11:16:11 crc kubenswrapper[4845]: E0202 11:16:11.956726 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb\": container with ID starting with 3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb not found: ID does not exist" containerID="3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.956769 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb"} err="failed to get container status \"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb\": rpc error: code = NotFound desc = could not find container \"3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb\": container with ID starting with 3dc191087e15c4b911d66a8f918f69ea24d12a7212ba993341bf4fa750a064cb not found: ID does not exist" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.956788 4845 scope.go:117] "RemoveContainer" containerID="64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4" Feb 02 11:16:11 crc kubenswrapper[4845]: E0202 11:16:11.957087 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4\": container with ID starting with 64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4 not found: ID does not exist" containerID="64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4" Feb 02 11:16:11 crc kubenswrapper[4845]: I0202 11:16:11.957158 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4"} err="failed to get container status \"64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4\": rpc error: code = NotFound desc = could not find container \"64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4\": container with ID starting with 64a1356a4dc372526ff0d42d69502467f92dd313c319f02635f9d19ec0032cf4 not found: ID does not exist" Feb 02 11:16:13 crc kubenswrapper[4845]: I0202 11:16:13.728173 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" path="/var/lib/kubelet/pods/6565f54e-2b32-49c5-bcca-06363f5bd2cb/volumes" Feb 02 11:16:16 crc kubenswrapper[4845]: I0202 11:16:16.237195 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:16:16 crc kubenswrapper[4845]: I0202 11:16:16.237530 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:16:46 crc kubenswrapper[4845]: I0202 11:16:46.237815 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:16:46 crc kubenswrapper[4845]: I0202 11:16:46.238392 4845 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.238292 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.238911 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.238974 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.240474 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.240586 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" 
containerID="cri-o://b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033" gracePeriod=600 Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.419987 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033" exitCode=0 Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.420076 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033"} Feb 02 11:17:16 crc kubenswrapper[4845]: I0202 11:17:16.420600 4845 scope.go:117] "RemoveContainer" containerID="1ba20d6a21bdd300599d4a9526d3a1de22ef5453da0f4e550c24dab0415a92cc" Feb 02 11:17:17 crc kubenswrapper[4845]: I0202 11:17:17.435367 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c"} Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.094570 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:16 crc kubenswrapper[4845]: E0202 11:18:16.095777 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="extract-content" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.095793 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="extract-content" Feb 02 11:18:16 crc kubenswrapper[4845]: E0202 11:18:16.095814 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="extract-utilities" Feb 02 
11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.095824 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="extract-utilities" Feb 02 11:18:16 crc kubenswrapper[4845]: E0202 11:18:16.095848 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="registry-server" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.095857 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="registry-server" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.096158 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="6565f54e-2b32-49c5-bcca-06363f5bd2cb" containerName="registry-server" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.099777 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.105009 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.221203 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.221490 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc 
kubenswrapper[4845]: I0202 11:18:16.221560 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5b8\" (UniqueName: \"kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.324016 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.324246 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.324316 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5b8\" (UniqueName: \"kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.324448 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc 
kubenswrapper[4845]: I0202 11:18:16.324599 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.347668 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5b8\" (UniqueName: \"kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8\") pod \"community-operators-sn4l9\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:16 crc kubenswrapper[4845]: I0202 11:18:16.436083 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:17 crc kubenswrapper[4845]: I0202 11:18:17.005272 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:17 crc kubenswrapper[4845]: I0202 11:18:17.120815 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerStarted","Data":"661ef6cebe116cb881e59387e7f3437734bcb869e84682751ec539128c5e9d35"} Feb 02 11:18:18 crc kubenswrapper[4845]: I0202 11:18:18.131203 4845 generic.go:334] "Generic (PLEG): container finished" podID="767d7a70-a583-4d16-abd2-675171ae5138" containerID="1772d1d13d50f55945231fffbf3a4f639758727f28938da97d680de269b43331" exitCode=0 Feb 02 11:18:18 crc kubenswrapper[4845]: I0202 11:18:18.131268 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" 
event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerDied","Data":"1772d1d13d50f55945231fffbf3a4f639758727f28938da97d680de269b43331"} Feb 02 11:18:19 crc kubenswrapper[4845]: I0202 11:18:19.157765 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerStarted","Data":"86de663521576bead672c95842f6ca66e30909f7018fa47197e8fe7a74d69cdc"} Feb 02 11:18:20 crc kubenswrapper[4845]: I0202 11:18:20.169679 4845 generic.go:334] "Generic (PLEG): container finished" podID="767d7a70-a583-4d16-abd2-675171ae5138" containerID="86de663521576bead672c95842f6ca66e30909f7018fa47197e8fe7a74d69cdc" exitCode=0 Feb 02 11:18:20 crc kubenswrapper[4845]: I0202 11:18:20.169739 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerDied","Data":"86de663521576bead672c95842f6ca66e30909f7018fa47197e8fe7a74d69cdc"} Feb 02 11:18:21 crc kubenswrapper[4845]: I0202 11:18:21.186077 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerStarted","Data":"d5e2157f1eef1e64120ddf4c5a3541e0b1d93226d3ae58eff596372e1ee9778e"} Feb 02 11:18:21 crc kubenswrapper[4845]: I0202 11:18:21.221410 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sn4l9" podStartSLOduration=2.659359963 podStartE2EDuration="5.221371472s" podCreationTimestamp="2026-02-02 11:18:16 +0000 UTC" firstStartedPulling="2026-02-02 11:18:18.13406853 +0000 UTC m=+2779.225469990" lastFinishedPulling="2026-02-02 11:18:20.696080049 +0000 UTC m=+2781.787481499" observedRunningTime="2026-02-02 11:18:21.213400552 +0000 UTC m=+2782.304802002" watchObservedRunningTime="2026-02-02 11:18:21.221371472 +0000 UTC 
m=+2782.312772922" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.682268 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.686684 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.706467 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.744496 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.745504 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrkv\" (UniqueName: \"kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.745560 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.848400 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrkv\" (UniqueName: 
\"kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.848504 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.848807 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.849071 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.849372 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:24 crc kubenswrapper[4845]: I0202 11:18:24.867224 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrkv\" (UniqueName: 
\"kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv\") pod \"redhat-operators-dmf4h\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:25 crc kubenswrapper[4845]: I0202 11:18:25.022840 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:25 crc kubenswrapper[4845]: I0202 11:18:25.562237 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.255761 4845 generic.go:334] "Generic (PLEG): container finished" podID="c6147389-3624-4919-ba37-600b9c23a55e" containerID="ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97" exitCode=0 Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.255847 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerDied","Data":"ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97"} Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.256188 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerStarted","Data":"c8397ff7e1a78cb1133cedaa7ed5721a42eb9c766cc160662ea2ab8efe50c5b6"} Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.436941 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.437503 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:26 crc kubenswrapper[4845]: I0202 11:18:26.490263 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:27 crc kubenswrapper[4845]: I0202 11:18:27.316202 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:28 crc kubenswrapper[4845]: I0202 11:18:28.278225 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerStarted","Data":"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac"} Feb 02 11:18:29 crc kubenswrapper[4845]: I0202 11:18:29.272709 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:29 crc kubenswrapper[4845]: I0202 11:18:29.288593 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sn4l9" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="registry-server" containerID="cri-o://d5e2157f1eef1e64120ddf4c5a3541e0b1d93226d3ae58eff596372e1ee9778e" gracePeriod=2 Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.301775 4845 generic.go:334] "Generic (PLEG): container finished" podID="767d7a70-a583-4d16-abd2-675171ae5138" containerID="d5e2157f1eef1e64120ddf4c5a3541e0b1d93226d3ae58eff596372e1ee9778e" exitCode=0 Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.301983 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerDied","Data":"d5e2157f1eef1e64120ddf4c5a3541e0b1d93226d3ae58eff596372e1ee9778e"} Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.755836 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.816857 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content\") pod \"767d7a70-a583-4d16-abd2-675171ae5138\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.816993 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities\") pod \"767d7a70-a583-4d16-abd2-675171ae5138\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.817033 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx5b8\" (UniqueName: \"kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8\") pod \"767d7a70-a583-4d16-abd2-675171ae5138\" (UID: \"767d7a70-a583-4d16-abd2-675171ae5138\") " Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.818858 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities" (OuterVolumeSpecName: "utilities") pod "767d7a70-a583-4d16-abd2-675171ae5138" (UID: "767d7a70-a583-4d16-abd2-675171ae5138"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.824691 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8" (OuterVolumeSpecName: "kube-api-access-jx5b8") pod "767d7a70-a583-4d16-abd2-675171ae5138" (UID: "767d7a70-a583-4d16-abd2-675171ae5138"). InnerVolumeSpecName "kube-api-access-jx5b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.869918 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "767d7a70-a583-4d16-abd2-675171ae5138" (UID: "767d7a70-a583-4d16-abd2-675171ae5138"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.919915 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.919960 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767d7a70-a583-4d16-abd2-675171ae5138-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:30 crc kubenswrapper[4845]: I0202 11:18:30.919976 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx5b8\" (UniqueName: \"kubernetes.io/projected/767d7a70-a583-4d16-abd2-675171ae5138-kube-api-access-jx5b8\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.317204 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn4l9" event={"ID":"767d7a70-a583-4d16-abd2-675171ae5138","Type":"ContainerDied","Data":"661ef6cebe116cb881e59387e7f3437734bcb869e84682751ec539128c5e9d35"} Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.317269 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn4l9" Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.317631 4845 scope.go:117] "RemoveContainer" containerID="d5e2157f1eef1e64120ddf4c5a3541e0b1d93226d3ae58eff596372e1ee9778e" Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.343275 4845 scope.go:117] "RemoveContainer" containerID="86de663521576bead672c95842f6ca66e30909f7018fa47197e8fe7a74d69cdc" Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.371196 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.388581 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sn4l9"] Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.400010 4845 scope.go:117] "RemoveContainer" containerID="1772d1d13d50f55945231fffbf3a4f639758727f28938da97d680de269b43331" Feb 02 11:18:31 crc kubenswrapper[4845]: I0202 11:18:31.743031 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767d7a70-a583-4d16-abd2-675171ae5138" path="/var/lib/kubelet/pods/767d7a70-a583-4d16-abd2-675171ae5138/volumes" Feb 02 11:18:33 crc kubenswrapper[4845]: I0202 11:18:33.346769 4845 generic.go:334] "Generic (PLEG): container finished" podID="c6147389-3624-4919-ba37-600b9c23a55e" containerID="47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac" exitCode=0 Feb 02 11:18:33 crc kubenswrapper[4845]: I0202 11:18:33.346869 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerDied","Data":"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac"} Feb 02 11:18:34 crc kubenswrapper[4845]: I0202 11:18:34.359156 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" 
event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerStarted","Data":"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0"} Feb 02 11:18:34 crc kubenswrapper[4845]: I0202 11:18:34.393818 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dmf4h" podStartSLOduration=2.7911557719999998 podStartE2EDuration="10.393789695s" podCreationTimestamp="2026-02-02 11:18:24 +0000 UTC" firstStartedPulling="2026-02-02 11:18:26.257583029 +0000 UTC m=+2787.348984479" lastFinishedPulling="2026-02-02 11:18:33.860216952 +0000 UTC m=+2794.951618402" observedRunningTime="2026-02-02 11:18:34.381715455 +0000 UTC m=+2795.473116905" watchObservedRunningTime="2026-02-02 11:18:34.393789695 +0000 UTC m=+2795.485191145" Feb 02 11:18:35 crc kubenswrapper[4845]: I0202 11:18:35.023100 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:35 crc kubenswrapper[4845]: I0202 11:18:35.023155 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:36 crc kubenswrapper[4845]: I0202 11:18:36.076840 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmf4h" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" probeResult="failure" output=< Feb 02 11:18:36 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:18:36 crc kubenswrapper[4845]: > Feb 02 11:18:46 crc kubenswrapper[4845]: I0202 11:18:46.067985 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmf4h" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" probeResult="failure" output=< Feb 02 11:18:46 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:18:46 crc kubenswrapper[4845]: > 
Feb 02 11:18:55 crc kubenswrapper[4845]: I0202 11:18:55.075112 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:55 crc kubenswrapper[4845]: I0202 11:18:55.141614 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:55 crc kubenswrapper[4845]: I0202 11:18:55.316758 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:56 crc kubenswrapper[4845]: I0202 11:18:56.576612 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dmf4h" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" containerID="cri-o://77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0" gracePeriod=2 Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.149428 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.316487 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content\") pod \"c6147389-3624-4919-ba37-600b9c23a55e\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.316571 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities\") pod \"c6147389-3624-4919-ba37-600b9c23a55e\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.316924 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcrkv\" (UniqueName: \"kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv\") pod \"c6147389-3624-4919-ba37-600b9c23a55e\" (UID: \"c6147389-3624-4919-ba37-600b9c23a55e\") " Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.317747 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities" (OuterVolumeSpecName: "utilities") pod "c6147389-3624-4919-ba37-600b9c23a55e" (UID: "c6147389-3624-4919-ba37-600b9c23a55e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.318717 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.324599 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv" (OuterVolumeSpecName: "kube-api-access-bcrkv") pod "c6147389-3624-4919-ba37-600b9c23a55e" (UID: "c6147389-3624-4919-ba37-600b9c23a55e"). InnerVolumeSpecName "kube-api-access-bcrkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.422336 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcrkv\" (UniqueName: \"kubernetes.io/projected/c6147389-3624-4919-ba37-600b9c23a55e-kube-api-access-bcrkv\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.454174 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6147389-3624-4919-ba37-600b9c23a55e" (UID: "c6147389-3624-4919-ba37-600b9c23a55e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.525048 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6147389-3624-4919-ba37-600b9c23a55e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.615246 4845 generic.go:334] "Generic (PLEG): container finished" podID="c6147389-3624-4919-ba37-600b9c23a55e" containerID="77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0" exitCode=0 Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.615321 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerDied","Data":"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0"} Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.615410 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmf4h" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.615441 4845 scope.go:117] "RemoveContainer" containerID="77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.615412 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmf4h" event={"ID":"c6147389-3624-4919-ba37-600b9c23a55e","Type":"ContainerDied","Data":"c8397ff7e1a78cb1133cedaa7ed5721a42eb9c766cc160662ea2ab8efe50c5b6"} Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.656997 4845 scope.go:117] "RemoveContainer" containerID="47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.668898 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.679665 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dmf4h"] Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.689417 4845 scope.go:117] "RemoveContainer" containerID="ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.730622 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6147389-3624-4919-ba37-600b9c23a55e" path="/var/lib/kubelet/pods/c6147389-3624-4919-ba37-600b9c23a55e/volumes" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.739654 4845 scope.go:117] "RemoveContainer" containerID="77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0" Feb 02 11:18:57 crc kubenswrapper[4845]: E0202 11:18:57.740315 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0\": container with ID starting with 
77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0 not found: ID does not exist" containerID="77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.740350 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0"} err="failed to get container status \"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0\": rpc error: code = NotFound desc = could not find container \"77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0\": container with ID starting with 77fb0b9dc09d0eb23f38f1fd7e2bef60eb57f4d4c9ed9d4fb9e40e2f9558fad0 not found: ID does not exist" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.740376 4845 scope.go:117] "RemoveContainer" containerID="47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac" Feb 02 11:18:57 crc kubenswrapper[4845]: E0202 11:18:57.740981 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac\": container with ID starting with 47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac not found: ID does not exist" containerID="47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.741081 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac"} err="failed to get container status \"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac\": rpc error: code = NotFound desc = could not find container \"47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac\": container with ID starting with 47a60e934a8b6791b05d0aafe880dd84949e46ec1a413fece2b0abcc118489ac not found: ID does not 
exist" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.741118 4845 scope.go:117] "RemoveContainer" containerID="ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97" Feb 02 11:18:57 crc kubenswrapper[4845]: E0202 11:18:57.741475 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97\": container with ID starting with ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97 not found: ID does not exist" containerID="ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97" Feb 02 11:18:57 crc kubenswrapper[4845]: I0202 11:18:57.741498 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97"} err="failed to get container status \"ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97\": rpc error: code = NotFound desc = could not find container \"ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97\": container with ID starting with ee326ff04ed15e3ada7db0063639a37d8fcee16151491f583e9fa6fbd3ca8e97 not found: ID does not exist" Feb 02 11:19:16 crc kubenswrapper[4845]: I0202 11:19:16.238096 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:19:16 crc kubenswrapper[4845]: I0202 11:19:16.238658 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 
11:19:46 crc kubenswrapper[4845]: I0202 11:19:46.237820 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:19:46 crc kubenswrapper[4845]: I0202 11:19:46.238452 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.237661 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.238246 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.238309 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.241525 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c"} 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.241610 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" gracePeriod=600 Feb 02 11:20:16 crc kubenswrapper[4845]: E0202 11:20:16.365722 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.435468 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" exitCode=0 Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.435517 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c"} Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.435556 4845 scope.go:117] "RemoveContainer" containerID="b01c1f6b8a2da00bbbdac89a515179f56a7b50789ec892275046cb267212a033" Feb 02 11:20:16 crc kubenswrapper[4845]: I0202 11:20:16.436473 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 
02 11:20:16 crc kubenswrapper[4845]: E0202 11:20:16.436927 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:20:27 crc kubenswrapper[4845]: I0202 11:20:27.712539 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:20:27 crc kubenswrapper[4845]: E0202 11:20:27.713451 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:20:39 crc kubenswrapper[4845]: I0202 11:20:39.721846 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:20:39 crc kubenswrapper[4845]: E0202 11:20:39.722754 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:20:50 crc kubenswrapper[4845]: I0202 11:20:50.712859 4845 scope.go:117] "RemoveContainer" 
containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:20:50 crc kubenswrapper[4845]: E0202 11:20:50.713590 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:21:01 crc kubenswrapper[4845]: I0202 11:21:01.714599 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:21:01 crc kubenswrapper[4845]: E0202 11:21:01.715502 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:21:14 crc kubenswrapper[4845]: I0202 11:21:14.713463 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:21:14 crc kubenswrapper[4845]: E0202 11:21:14.714365 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:21:27 crc kubenswrapper[4845]: I0202 11:21:27.720436 4845 scope.go:117] 
"RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:21:27 crc kubenswrapper[4845]: E0202 11:21:27.722878 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:21:41 crc kubenswrapper[4845]: I0202 11:21:41.717531 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:21:41 crc kubenswrapper[4845]: E0202 11:21:41.718162 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:21:53 crc kubenswrapper[4845]: I0202 11:21:53.713901 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:21:53 crc kubenswrapper[4845]: E0202 11:21:53.715269 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:22:07 crc kubenswrapper[4845]: I0202 11:22:07.713276 
4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:22:07 crc kubenswrapper[4845]: E0202 11:22:07.714524 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:22:22 crc kubenswrapper[4845]: I0202 11:22:22.713524 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:22:22 crc kubenswrapper[4845]: E0202 11:22:22.714289 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:22:36 crc kubenswrapper[4845]: I0202 11:22:36.712946 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:22:36 crc kubenswrapper[4845]: E0202 11:22:36.713775 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:22:47 crc kubenswrapper[4845]: I0202 
11:22:47.713822 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:22:47 crc kubenswrapper[4845]: E0202 11:22:47.714526 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:23:00 crc kubenswrapper[4845]: I0202 11:23:00.712786 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:23:00 crc kubenswrapper[4845]: E0202 11:23:00.713572 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:23:15 crc kubenswrapper[4845]: I0202 11:23:15.712749 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:23:15 crc kubenswrapper[4845]: E0202 11:23:15.713878 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:23:30 crc 
kubenswrapper[4845]: I0202 11:23:30.713016 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:23:30 crc kubenswrapper[4845]: E0202 11:23:30.714989 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:23:42 crc kubenswrapper[4845]: I0202 11:23:42.712675 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:23:42 crc kubenswrapper[4845]: E0202 11:23:42.713430 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:23:56 crc kubenswrapper[4845]: I0202 11:23:56.713537 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:23:56 crc kubenswrapper[4845]: E0202 11:23:56.714487 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 
02 11:24:09 crc kubenswrapper[4845]: I0202 11:24:09.721253 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:24:09 crc kubenswrapper[4845]: E0202 11:24:09.722217 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:24:21 crc kubenswrapper[4845]: I0202 11:24:21.712226 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:24:21 crc kubenswrapper[4845]: E0202 11:24:21.713054 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:24:33 crc kubenswrapper[4845]: I0202 11:24:33.712932 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:24:33 crc kubenswrapper[4845]: E0202 11:24:33.713854 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:24:47 crc kubenswrapper[4845]: I0202 11:24:47.713463 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:24:47 crc kubenswrapper[4845]: E0202 11:24:47.714537 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:25:02 crc kubenswrapper[4845]: I0202 11:25:02.712929 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:25:02 crc kubenswrapper[4845]: E0202 11:25:02.713780 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:25:15 crc kubenswrapper[4845]: I0202 11:25:15.715044 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:25:15 crc kubenswrapper[4845]: E0202 11:25:15.715988 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:25:26 crc kubenswrapper[4845]: I0202 11:25:26.713542 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:25:27 crc kubenswrapper[4845]: I0202 11:25:27.905865 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588"} Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.834191 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835253 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="extract-utilities" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835267 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="extract-utilities" Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835283 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="extract-content" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835289 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="extract-content" Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835322 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="extract-utilities" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835332 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="extract-utilities" 
Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835352 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835359 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835372 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835381 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: E0202 11:25:59.835409 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="extract-content" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835415 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="extract-content" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835621 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="767d7a70-a583-4d16-abd2-675171ae5138" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.835645 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6147389-3624-4919-ba37-600b9c23a55e" containerName="registry-server" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.837268 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.871391 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.925939 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.926091 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:25:59 crc kubenswrapper[4845]: I0202 11:25:59.926134 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvjl\" (UniqueName: \"kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.029130 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.029390 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.029421 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvjl\" (UniqueName: \"kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.029779 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.029856 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.063005 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvjl\" (UniqueName: \"kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl\") pod \"redhat-marketplace-8cjkz\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.163594 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:00 crc kubenswrapper[4845]: I0202 11:26:00.716486 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:26:01 crc kubenswrapper[4845]: I0202 11:26:01.252844 4845 generic.go:334] "Generic (PLEG): container finished" podID="24449f44-5470-48a2-b428-b5e44c302895" containerID="ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32" exitCode=0 Feb 02 11:26:01 crc kubenswrapper[4845]: I0202 11:26:01.253117 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerDied","Data":"ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32"} Feb 02 11:26:01 crc kubenswrapper[4845]: I0202 11:26:01.254326 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerStarted","Data":"59c0560168a919aad63948ffa20e93fc4981745b7daf0aacf1636b1746ee6fbf"} Feb 02 11:26:01 crc kubenswrapper[4845]: I0202 11:26:01.256555 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:26:03 crc kubenswrapper[4845]: I0202 11:26:03.280565 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerStarted","Data":"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95"} Feb 02 11:26:04 crc kubenswrapper[4845]: I0202 11:26:04.301273 4845 generic.go:334] "Generic (PLEG): container finished" podID="24449f44-5470-48a2-b428-b5e44c302895" containerID="eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95" exitCode=0 Feb 02 11:26:04 crc kubenswrapper[4845]: I0202 11:26:04.301522 4845 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerDied","Data":"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95"} Feb 02 11:26:05 crc kubenswrapper[4845]: I0202 11:26:05.319046 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerStarted","Data":"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43"} Feb 02 11:26:05 crc kubenswrapper[4845]: I0202 11:26:05.352232 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8cjkz" podStartSLOduration=2.9102035109999997 podStartE2EDuration="6.352204489s" podCreationTimestamp="2026-02-02 11:25:59 +0000 UTC" firstStartedPulling="2026-02-02 11:26:01.256283398 +0000 UTC m=+3242.347684838" lastFinishedPulling="2026-02-02 11:26:04.698284366 +0000 UTC m=+3245.789685816" observedRunningTime="2026-02-02 11:26:05.346268028 +0000 UTC m=+3246.437669488" watchObservedRunningTime="2026-02-02 11:26:05.352204489 +0000 UTC m=+3246.443605939" Feb 02 11:26:10 crc kubenswrapper[4845]: I0202 11:26:10.164199 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:10 crc kubenswrapper[4845]: I0202 11:26:10.164764 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:10 crc kubenswrapper[4845]: I0202 11:26:10.220694 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:10 crc kubenswrapper[4845]: I0202 11:26:10.415487 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:10 crc kubenswrapper[4845]: I0202 
11:26:10.465233 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:26:12 crc kubenswrapper[4845]: I0202 11:26:12.394369 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8cjkz" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="registry-server" containerID="cri-o://62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43" gracePeriod=2 Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.120159 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.284961 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content\") pod \"24449f44-5470-48a2-b428-b5e44c302895\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.285337 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvjl\" (UniqueName: \"kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl\") pod \"24449f44-5470-48a2-b428-b5e44c302895\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.285414 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities\") pod \"24449f44-5470-48a2-b428-b5e44c302895\" (UID: \"24449f44-5470-48a2-b428-b5e44c302895\") " Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.286321 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities" (OuterVolumeSpecName: 
"utilities") pod "24449f44-5470-48a2-b428-b5e44c302895" (UID: "24449f44-5470-48a2-b428-b5e44c302895"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.292569 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl" (OuterVolumeSpecName: "kube-api-access-4zvjl") pod "24449f44-5470-48a2-b428-b5e44c302895" (UID: "24449f44-5470-48a2-b428-b5e44c302895"). InnerVolumeSpecName "kube-api-access-4zvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.308611 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24449f44-5470-48a2-b428-b5e44c302895" (UID: "24449f44-5470-48a2-b428-b5e44c302895"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.388677 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.388985 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvjl\" (UniqueName: \"kubernetes.io/projected/24449f44-5470-48a2-b428-b5e44c302895-kube-api-access-4zvjl\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.389101 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24449f44-5470-48a2-b428-b5e44c302895-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.406672 4845 generic.go:334] "Generic (PLEG): container finished" podID="24449f44-5470-48a2-b428-b5e44c302895" containerID="62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43" exitCode=0 Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.406717 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerDied","Data":"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43"} Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.406744 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cjkz" event={"ID":"24449f44-5470-48a2-b428-b5e44c302895","Type":"ContainerDied","Data":"59c0560168a919aad63948ffa20e93fc4981745b7daf0aacf1636b1746ee6fbf"} Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.406759 4845 scope.go:117] "RemoveContainer" containerID="62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 
11:26:13.406794 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cjkz" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.429355 4845 scope.go:117] "RemoveContainer" containerID="eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.444618 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.454875 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cjkz"] Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.469651 4845 scope.go:117] "RemoveContainer" containerID="ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.506204 4845 scope.go:117] "RemoveContainer" containerID="62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43" Feb 02 11:26:13 crc kubenswrapper[4845]: E0202 11:26:13.506624 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43\": container with ID starting with 62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43 not found: ID does not exist" containerID="62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.506661 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43"} err="failed to get container status \"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43\": rpc error: code = NotFound desc = could not find container \"62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43\": container with ID starting with 
62ebdb0a0cc61aa20bf0bf3d31cb2ecfc40e40dd77158fec5f66249eff6b6d43 not found: ID does not exist" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.506686 4845 scope.go:117] "RemoveContainer" containerID="eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95" Feb 02 11:26:13 crc kubenswrapper[4845]: E0202 11:26:13.506963 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95\": container with ID starting with eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95 not found: ID does not exist" containerID="eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.507043 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95"} err="failed to get container status \"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95\": rpc error: code = NotFound desc = could not find container \"eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95\": container with ID starting with eb2701e64c29a4841dd02df677cae1738dcaaafb26704dfd153aaf007130ea95 not found: ID does not exist" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.507105 4845 scope.go:117] "RemoveContainer" containerID="ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32" Feb 02 11:26:13 crc kubenswrapper[4845]: E0202 11:26:13.507471 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32\": container with ID starting with ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32 not found: ID does not exist" containerID="ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32" Feb 02 11:26:13 crc 
kubenswrapper[4845]: I0202 11:26:13.507566 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32"} err="failed to get container status \"ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32\": rpc error: code = NotFound desc = could not find container \"ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32\": container with ID starting with ebcbc35da1b150597e117b27e957c561c3b786327ecabc4dfacd2d1ecd84cf32 not found: ID does not exist" Feb 02 11:26:13 crc kubenswrapper[4845]: I0202 11:26:13.727315 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24449f44-5470-48a2-b428-b5e44c302895" path="/var/lib/kubelet/pods/24449f44-5470-48a2-b428-b5e44c302895/volumes" Feb 02 11:27:46 crc kubenswrapper[4845]: I0202 11:27:46.237751 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:27:46 crc kubenswrapper[4845]: I0202 11:27:46.239104 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:28:16 crc kubenswrapper[4845]: I0202 11:28:16.237429 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:28:16 crc kubenswrapper[4845]: I0202 11:28:16.237970 4845 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:28:46 crc kubenswrapper[4845]: I0202 11:28:46.237694 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:28:46 crc kubenswrapper[4845]: I0202 11:28:46.238337 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:28:46 crc kubenswrapper[4845]: I0202 11:28:46.238389 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:28:46 crc kubenswrapper[4845]: I0202 11:28:46.239483 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:28:46 crc kubenswrapper[4845]: I0202 11:28:46.239539 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" 
containerName="machine-config-daemon" containerID="cri-o://d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588" gracePeriod=600 Feb 02 11:28:47 crc kubenswrapper[4845]: I0202 11:28:47.316323 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588" exitCode=0 Feb 02 11:28:47 crc kubenswrapper[4845]: I0202 11:28:47.316417 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588"} Feb 02 11:28:47 crc kubenswrapper[4845]: I0202 11:28:47.316921 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b"} Feb 02 11:28:47 crc kubenswrapper[4845]: I0202 11:28:47.316951 4845 scope.go:117] "RemoveContainer" containerID="36183e2dfd6aaef1824bcd769ccb254bb6eb7d7237f3d8a43eef04603719266c" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.593831 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:16 crc kubenswrapper[4845]: E0202 11:29:16.594951 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="extract-utilities" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.594970 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="extract-utilities" Feb 02 11:29:16 crc kubenswrapper[4845]: E0202 11:29:16.594988 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24449f44-5470-48a2-b428-b5e44c302895" 
containerName="extract-content" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.594995 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="extract-content" Feb 02 11:29:16 crc kubenswrapper[4845]: E0202 11:29:16.595038 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="registry-server" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.595046 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="registry-server" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.595309 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="24449f44-5470-48a2-b428-b5e44c302895" containerName="registry-server" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.597497 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.603638 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.603764 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.603939 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-svt7g\" (UniqueName: \"kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.613758 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.705980 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.706060 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.706165 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svt7g\" (UniqueName: \"kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.706491 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 
11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.706548 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.727787 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svt7g\" (UniqueName: \"kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g\") pod \"redhat-operators-z6b55\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:16 crc kubenswrapper[4845]: I0202 11:29:16.920840 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:17 crc kubenswrapper[4845]: I0202 11:29:17.523198 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:17 crc kubenswrapper[4845]: I0202 11:29:17.647468 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerStarted","Data":"154df5129783a3ccccf1aeba99b7d58286cbaa89ab345a93ba51357b26f227dd"} Feb 02 11:29:18 crc kubenswrapper[4845]: I0202 11:29:18.658497 4845 generic.go:334] "Generic (PLEG): container finished" podID="61834d50-0e84-4db8-9777-640ca6b26d60" containerID="f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5" exitCode=0 Feb 02 11:29:18 crc kubenswrapper[4845]: I0202 11:29:18.658701 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" 
event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerDied","Data":"f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5"} Feb 02 11:29:20 crc kubenswrapper[4845]: I0202 11:29:20.679217 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerStarted","Data":"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9"} Feb 02 11:29:27 crc kubenswrapper[4845]: I0202 11:29:27.748924 4845 generic.go:334] "Generic (PLEG): container finished" podID="61834d50-0e84-4db8-9777-640ca6b26d60" containerID="01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9" exitCode=0 Feb 02 11:29:27 crc kubenswrapper[4845]: I0202 11:29:27.749027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerDied","Data":"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9"} Feb 02 11:29:28 crc kubenswrapper[4845]: I0202 11:29:28.764721 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerStarted","Data":"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50"} Feb 02 11:29:28 crc kubenswrapper[4845]: I0202 11:29:28.796911 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z6b55" podStartSLOduration=3.30269963 podStartE2EDuration="12.796867527s" podCreationTimestamp="2026-02-02 11:29:16 +0000 UTC" firstStartedPulling="2026-02-02 11:29:18.660446574 +0000 UTC m=+3439.751848024" lastFinishedPulling="2026-02-02 11:29:28.154614471 +0000 UTC m=+3449.246015921" observedRunningTime="2026-02-02 11:29:28.786102127 +0000 UTC m=+3449.877503597" watchObservedRunningTime="2026-02-02 11:29:28.796867527 +0000 UTC m=+3449.888268977" 
Feb 02 11:29:36 crc kubenswrapper[4845]: I0202 11:29:36.921422 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:36 crc kubenswrapper[4845]: I0202 11:29:36.921979 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:37 crc kubenswrapper[4845]: I0202 11:29:37.976627 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z6b55" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="registry-server" probeResult="failure" output=< Feb 02 11:29:37 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:29:37 crc kubenswrapper[4845]: > Feb 02 11:29:46 crc kubenswrapper[4845]: I0202 11:29:46.982898 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:47 crc kubenswrapper[4845]: I0202 11:29:47.055343 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:47 crc kubenswrapper[4845]: I0202 11:29:47.799784 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:48 crc kubenswrapper[4845]: I0202 11:29:48.970075 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z6b55" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="registry-server" containerID="cri-o://ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50" gracePeriod=2 Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.632706 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.705452 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities\") pod \"61834d50-0e84-4db8-9777-640ca6b26d60\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.705848 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svt7g\" (UniqueName: \"kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g\") pod \"61834d50-0e84-4db8-9777-640ca6b26d60\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.706124 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content\") pod \"61834d50-0e84-4db8-9777-640ca6b26d60\" (UID: \"61834d50-0e84-4db8-9777-640ca6b26d60\") " Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.707787 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities" (OuterVolumeSpecName: "utilities") pod "61834d50-0e84-4db8-9777-640ca6b26d60" (UID: "61834d50-0e84-4db8-9777-640ca6b26d60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.725173 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g" (OuterVolumeSpecName: "kube-api-access-svt7g") pod "61834d50-0e84-4db8-9777-640ca6b26d60" (UID: "61834d50-0e84-4db8-9777-640ca6b26d60"). InnerVolumeSpecName "kube-api-access-svt7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.813788 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.813819 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svt7g\" (UniqueName: \"kubernetes.io/projected/61834d50-0e84-4db8-9777-640ca6b26d60-kube-api-access-svt7g\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.952230 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61834d50-0e84-4db8-9777-640ca6b26d60" (UID: "61834d50-0e84-4db8-9777-640ca6b26d60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.984834 4845 generic.go:334] "Generic (PLEG): container finished" podID="61834d50-0e84-4db8-9777-640ca6b26d60" containerID="ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50" exitCode=0 Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.984899 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerDied","Data":"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50"} Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.984931 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6b55" event={"ID":"61834d50-0e84-4db8-9777-640ca6b26d60","Type":"ContainerDied","Data":"154df5129783a3ccccf1aeba99b7d58286cbaa89ab345a93ba51357b26f227dd"} Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.984952 
4845 scope.go:117] "RemoveContainer" containerID="ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50" Feb 02 11:29:49 crc kubenswrapper[4845]: I0202 11:29:49.985143 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6b55" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.019776 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61834d50-0e84-4db8-9777-640ca6b26d60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.025965 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.030139 4845 scope.go:117] "RemoveContainer" containerID="01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.045487 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z6b55"] Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.081730 4845 scope.go:117] "RemoveContainer" containerID="f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.118988 4845 scope.go:117] "RemoveContainer" containerID="ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50" Feb 02 11:29:50 crc kubenswrapper[4845]: E0202 11:29:50.119474 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50\": container with ID starting with ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50 not found: ID does not exist" containerID="ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.119533 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50"} err="failed to get container status \"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50\": rpc error: code = NotFound desc = could not find container \"ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50\": container with ID starting with ba6394952690016e2bc561e4ff8101ae0b164aeb25b7f3a80c0d6c9d6b83cd50 not found: ID does not exist" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.119570 4845 scope.go:117] "RemoveContainer" containerID="01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9" Feb 02 11:29:50 crc kubenswrapper[4845]: E0202 11:29:50.120292 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9\": container with ID starting with 01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9 not found: ID does not exist" containerID="01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.121131 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9"} err="failed to get container status \"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9\": rpc error: code = NotFound desc = could not find container \"01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9\": container with ID starting with 01501528ca60a8de0900c04152d80c68fe32fef3a5fe3bc7c8f3cf138b87eac9 not found: ID does not exist" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.121177 4845 scope.go:117] "RemoveContainer" containerID="f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5" Feb 02 11:29:50 crc kubenswrapper[4845]: E0202 
11:29:50.121507 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5\": container with ID starting with f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5 not found: ID does not exist" containerID="f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5" Feb 02 11:29:50 crc kubenswrapper[4845]: I0202 11:29:50.121533 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5"} err="failed to get container status \"f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5\": rpc error: code = NotFound desc = could not find container \"f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5\": container with ID starting with f4a0be7c873dd9dddb0910949bd2bbd98096db4082af9bf6827836b0050900b5 not found: ID does not exist" Feb 02 11:29:51 crc kubenswrapper[4845]: I0202 11:29:51.728101 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" path="/var/lib/kubelet/pods/61834d50-0e84-4db8-9777-640ca6b26d60/volumes" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.205786 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv"] Feb 02 11:30:00 crc kubenswrapper[4845]: E0202 11:30:00.209708 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="registry-server" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.209833 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="registry-server" Feb 02 11:30:00 crc kubenswrapper[4845]: E0202 11:30:00.209946 4845 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="extract-utilities" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.210028 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="extract-utilities" Feb 02 11:30:00 crc kubenswrapper[4845]: E0202 11:30:00.210119 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="extract-content" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.210207 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="extract-content" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.210602 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="61834d50-0e84-4db8-9777-640ca6b26d60" containerName="registry-server" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.212533 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.222936 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.223356 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.227160 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv"] Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.358410 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume\") pod \"collect-profiles-29500530-j6pnv\" 
(UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.358926 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx642\" (UniqueName: \"kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.359218 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.462502 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.462602 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx642\" (UniqueName: \"kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.462671 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.464006 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.472132 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.488111 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx642\" (UniqueName: \"kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642\") pod \"collect-profiles-29500530-j6pnv\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:00 crc kubenswrapper[4845]: I0202 11:30:00.552867 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:01 crc kubenswrapper[4845]: I0202 11:30:01.150001 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv"] Feb 02 11:30:02 crc kubenswrapper[4845]: I0202 11:30:02.143511 4845 generic.go:334] "Generic (PLEG): container finished" podID="3a82306d-7b9e-4626-b188-0f5949bb75d5" containerID="edb6987761faadf9de2625a5fc2838557508106560e93a9005dcd6dd2f5f7b4f" exitCode=0 Feb 02 11:30:02 crc kubenswrapper[4845]: I0202 11:30:02.143641 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" event={"ID":"3a82306d-7b9e-4626-b188-0f5949bb75d5","Type":"ContainerDied","Data":"edb6987761faadf9de2625a5fc2838557508106560e93a9005dcd6dd2f5f7b4f"} Feb 02 11:30:02 crc kubenswrapper[4845]: I0202 11:30:02.144173 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" event={"ID":"3a82306d-7b9e-4626-b188-0f5949bb75d5","Type":"ContainerStarted","Data":"998b19e9c77f4187ea614be7e1eaa6c1e2f0406f6cc7acd15c0aa5604ea7c105"} Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.716675 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.790781 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume\") pod \"3a82306d-7b9e-4626-b188-0f5949bb75d5\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.791362 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume\") pod \"3a82306d-7b9e-4626-b188-0f5949bb75d5\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.791467 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx642\" (UniqueName: \"kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642\") pod \"3a82306d-7b9e-4626-b188-0f5949bb75d5\" (UID: \"3a82306d-7b9e-4626-b188-0f5949bb75d5\") " Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.792633 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a82306d-7b9e-4626-b188-0f5949bb75d5" (UID: "3a82306d-7b9e-4626-b188-0f5949bb75d5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.793629 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a82306d-7b9e-4626-b188-0f5949bb75d5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.799781 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a82306d-7b9e-4626-b188-0f5949bb75d5" (UID: "3a82306d-7b9e-4626-b188-0f5949bb75d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.804758 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642" (OuterVolumeSpecName: "kube-api-access-xx642") pod "3a82306d-7b9e-4626-b188-0f5949bb75d5" (UID: "3a82306d-7b9e-4626-b188-0f5949bb75d5"). InnerVolumeSpecName "kube-api-access-xx642". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.897124 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a82306d-7b9e-4626-b188-0f5949bb75d5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:03 crc kubenswrapper[4845]: I0202 11:30:03.897650 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx642\" (UniqueName: \"kubernetes.io/projected/3a82306d-7b9e-4626-b188-0f5949bb75d5-kube-api-access-xx642\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:04 crc kubenswrapper[4845]: I0202 11:30:04.179933 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" event={"ID":"3a82306d-7b9e-4626-b188-0f5949bb75d5","Type":"ContainerDied","Data":"998b19e9c77f4187ea614be7e1eaa6c1e2f0406f6cc7acd15c0aa5604ea7c105"} Feb 02 11:30:04 crc kubenswrapper[4845]: I0202 11:30:04.180004 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-j6pnv" Feb 02 11:30:04 crc kubenswrapper[4845]: I0202 11:30:04.180019 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998b19e9c77f4187ea614be7e1eaa6c1e2f0406f6cc7acd15c0aa5604ea7c105" Feb 02 11:30:04 crc kubenswrapper[4845]: I0202 11:30:04.820421 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq"] Feb 02 11:30:04 crc kubenswrapper[4845]: I0202 11:30:04.833114 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-wl2wq"] Feb 02 11:30:05 crc kubenswrapper[4845]: I0202 11:30:05.732794 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af5c06e-cf07-4f85-97e9-6b93ec03281c" path="/var/lib/kubelet/pods/6af5c06e-cf07-4f85-97e9-6b93ec03281c/volumes" Feb 02 11:30:11 crc kubenswrapper[4845]: I0202 11:30:11.261849 4845 scope.go:117] "RemoveContainer" containerID="a18e0c7c2dae09d3f5627d9a9acf7414e19ce7a97f56c94017a2bc4812f89130" Feb 02 11:30:46 crc kubenswrapper[4845]: I0202 11:30:46.237903 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:30:46 crc kubenswrapper[4845]: I0202 11:30:46.238495 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.212390 4845 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:07 crc kubenswrapper[4845]: E0202 11:31:07.215625 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a82306d-7b9e-4626-b188-0f5949bb75d5" containerName="collect-profiles" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.215648 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a82306d-7b9e-4626-b188-0f5949bb75d5" containerName="collect-profiles" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.215989 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a82306d-7b9e-4626-b188-0f5949bb75d5" containerName="collect-profiles" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.219223 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.238025 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.330342 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.330437 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn65q\" (UniqueName: \"kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.331665 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.413502 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.416232 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.430336 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.435736 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.435788 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.436407 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content\") pod \"certified-operators-tz54l\" (UID: 
\"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.436418 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.436454 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.436585 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn65q\" (UniqueName: \"kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.436710 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2rw\" (UniqueName: \"kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.437294 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities\") pod \"certified-operators-tz54l\" 
(UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.467489 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn65q\" (UniqueName: \"kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q\") pod \"certified-operators-tz54l\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.541119 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.541413 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.541487 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz2rw\" (UniqueName: \"kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.541695 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content\") pod \"community-operators-bw7j2\" (UID: 
\"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.541979 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.547668 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.577813 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2rw\" (UniqueName: \"kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw\") pod \"community-operators-bw7j2\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:07 crc kubenswrapper[4845]: I0202 11:31:07.749596 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.223990 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.598743 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.989624 4845 generic.go:334] "Generic (PLEG): container finished" podID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerID="acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba" exitCode=0 Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.989681 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerDied","Data":"acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba"} Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.989958 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerStarted","Data":"cd11d8ef1250d6350601d85867207b1fddc0dbb1f1a948ede5f7446e720e4011"} Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.991745 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.993095 4845 generic.go:334] "Generic (PLEG): container finished" podID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerID="35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436" exitCode=0 Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.993132 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" 
event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerDied","Data":"35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436"} Feb 02 11:31:08 crc kubenswrapper[4845]: I0202 11:31:08.993154 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerStarted","Data":"1ac47ff9ae71bd0e30bcc2ba96930aff3367febb4799071e018c1f23e5d1db73"} Feb 02 11:31:12 crc kubenswrapper[4845]: I0202 11:31:12.036647 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerStarted","Data":"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08"} Feb 02 11:31:12 crc kubenswrapper[4845]: I0202 11:31:12.045655 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerStarted","Data":"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565"} Feb 02 11:31:13 crc kubenswrapper[4845]: I0202 11:31:13.062427 4845 generic.go:334] "Generic (PLEG): container finished" podID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerID="cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565" exitCode=0 Feb 02 11:31:13 crc kubenswrapper[4845]: I0202 11:31:13.062538 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerDied","Data":"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565"} Feb 02 11:31:14 crc kubenswrapper[4845]: I0202 11:31:14.077049 4845 generic.go:334] "Generic (PLEG): container finished" podID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerID="0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08" exitCode=0 Feb 02 11:31:14 crc kubenswrapper[4845]: 
I0202 11:31:14.077114 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerDied","Data":"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08"} Feb 02 11:31:15 crc kubenswrapper[4845]: I0202 11:31:15.094165 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerStarted","Data":"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d"} Feb 02 11:31:15 crc kubenswrapper[4845]: I0202 11:31:15.097914 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerStarted","Data":"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022"} Feb 02 11:31:15 crc kubenswrapper[4845]: I0202 11:31:15.115650 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tz54l" podStartSLOduration=2.62289833 podStartE2EDuration="8.11563035s" podCreationTimestamp="2026-02-02 11:31:07 +0000 UTC" firstStartedPulling="2026-02-02 11:31:08.991491825 +0000 UTC m=+3550.082893275" lastFinishedPulling="2026-02-02 11:31:14.484223835 +0000 UTC m=+3555.575625295" observedRunningTime="2026-02-02 11:31:15.11459131 +0000 UTC m=+3556.205992770" watchObservedRunningTime="2026-02-02 11:31:15.11563035 +0000 UTC m=+3556.207031790" Feb 02 11:31:15 crc kubenswrapper[4845]: I0202 11:31:15.139371 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bw7j2" podStartSLOduration=2.991077367 podStartE2EDuration="8.13934796s" podCreationTimestamp="2026-02-02 11:31:07 +0000 UTC" firstStartedPulling="2026-02-02 11:31:08.995003125 +0000 UTC m=+3550.086404575" lastFinishedPulling="2026-02-02 11:31:14.143273718 
+0000 UTC m=+3555.234675168" observedRunningTime="2026-02-02 11:31:15.134075499 +0000 UTC m=+3556.225476959" watchObservedRunningTime="2026-02-02 11:31:15.13934796 +0000 UTC m=+3556.230749420" Feb 02 11:31:16 crc kubenswrapper[4845]: I0202 11:31:16.238429 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:31:16 crc kubenswrapper[4845]: I0202 11:31:16.239365 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.548150 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.548433 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.609938 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.750339 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.750445 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:17 crc kubenswrapper[4845]: I0202 11:31:17.806994 4845 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:27 crc kubenswrapper[4845]: I0202 11:31:27.610197 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:27 crc kubenswrapper[4845]: I0202 11:31:27.813537 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:32 crc kubenswrapper[4845]: I0202 11:31:32.197159 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:32 crc kubenswrapper[4845]: I0202 11:31:32.198084 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tz54l" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="registry-server" containerID="cri-o://8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d" gracePeriod=2 Feb 02 11:31:32 crc kubenswrapper[4845]: I0202 11:31:32.594287 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:32 crc kubenswrapper[4845]: I0202 11:31:32.594798 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bw7j2" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="registry-server" containerID="cri-o://5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022" gracePeriod=2 Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.477461 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.484803 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.555872 4845 generic.go:334] "Generic (PLEG): container finished" podID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerID="8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d" exitCode=0 Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.555961 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerDied","Data":"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d"} Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.556027 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz54l" event={"ID":"ec8ab263-5831-40ac-aa87-f187c2b59314","Type":"ContainerDied","Data":"cd11d8ef1250d6350601d85867207b1fddc0dbb1f1a948ede5f7446e720e4011"} Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.556050 4845 scope.go:117] "RemoveContainer" containerID="8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.556281 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tz54l" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.560169 4845 generic.go:334] "Generic (PLEG): container finished" podID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerID="5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022" exitCode=0 Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.560247 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerDied","Data":"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022"} Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.560282 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw7j2" event={"ID":"84c47002-db31-44bf-8691-c1a21ba5a78e","Type":"ContainerDied","Data":"1ac47ff9ae71bd0e30bcc2ba96930aff3367febb4799071e018c1f23e5d1db73"} Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.560304 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bw7j2" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.580963 4845 scope.go:117] "RemoveContainer" containerID="0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.610677 4845 scope.go:117] "RemoveContainer" containerID="acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.624650 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz2rw\" (UniqueName: \"kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw\") pod \"84c47002-db31-44bf-8691-c1a21ba5a78e\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.624781 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities\") pod \"ec8ab263-5831-40ac-aa87-f187c2b59314\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.624825 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content\") pod \"ec8ab263-5831-40ac-aa87-f187c2b59314\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.624993 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities\") pod \"84c47002-db31-44bf-8691-c1a21ba5a78e\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.625227 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content\") pod \"84c47002-db31-44bf-8691-c1a21ba5a78e\" (UID: \"84c47002-db31-44bf-8691-c1a21ba5a78e\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.625289 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn65q\" (UniqueName: \"kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q\") pod \"ec8ab263-5831-40ac-aa87-f187c2b59314\" (UID: \"ec8ab263-5831-40ac-aa87-f187c2b59314\") " Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.626028 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities" (OuterVolumeSpecName: "utilities") pod "ec8ab263-5831-40ac-aa87-f187c2b59314" (UID: "ec8ab263-5831-40ac-aa87-f187c2b59314"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.626142 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities" (OuterVolumeSpecName: "utilities") pod "84c47002-db31-44bf-8691-c1a21ba5a78e" (UID: "84c47002-db31-44bf-8691-c1a21ba5a78e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.627227 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.627258 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.632061 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q" (OuterVolumeSpecName: "kube-api-access-fn65q") pod "ec8ab263-5831-40ac-aa87-f187c2b59314" (UID: "ec8ab263-5831-40ac-aa87-f187c2b59314"). InnerVolumeSpecName "kube-api-access-fn65q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.632446 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw" (OuterVolumeSpecName: "kube-api-access-hz2rw") pod "84c47002-db31-44bf-8691-c1a21ba5a78e" (UID: "84c47002-db31-44bf-8691-c1a21ba5a78e"). InnerVolumeSpecName "kube-api-access-hz2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.687428 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec8ab263-5831-40ac-aa87-f187c2b59314" (UID: "ec8ab263-5831-40ac-aa87-f187c2b59314"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.690243 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84c47002-db31-44bf-8691-c1a21ba5a78e" (UID: "84c47002-db31-44bf-8691-c1a21ba5a78e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.727066 4845 scope.go:117] "RemoveContainer" containerID="8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.727762 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d\": container with ID starting with 8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d not found: ID does not exist" containerID="8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.727869 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d"} err="failed to get container status \"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d\": rpc error: code = NotFound desc = could not find container \"8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d\": container with ID starting with 8669c08da5bdec9a653b0545fb0c894b2b341be63906f2d7c4a708a4958d0c7d not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.728017 4845 scope.go:117] "RemoveContainer" containerID="0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.728524 4845 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08\": container with ID starting with 0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08 not found: ID does not exist" containerID="0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.728672 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08"} err="failed to get container status \"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08\": rpc error: code = NotFound desc = could not find container \"0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08\": container with ID starting with 0742c7daa74690d966519991d686ec68220a1124cd496cb9a89fa968d5341f08 not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.728742 4845 scope.go:117] "RemoveContainer" containerID="acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.729208 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba\": container with ID starting with acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba not found: ID does not exist" containerID="acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729325 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba"} err="failed to get container status \"acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba\": rpc error: code = NotFound desc = could 
not find container \"acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba\": container with ID starting with acecf1d7f72e342f32e9f7baa1fdfed8efa08f9a624e182d96067aaee43590ba not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729436 4845 scope.go:117] "RemoveContainer" containerID="5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729551 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz2rw\" (UniqueName: \"kubernetes.io/projected/84c47002-db31-44bf-8691-c1a21ba5a78e-kube-api-access-hz2rw\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729630 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ab263-5831-40ac-aa87-f187c2b59314-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729645 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c47002-db31-44bf-8691-c1a21ba5a78e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.729658 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn65q\" (UniqueName: \"kubernetes.io/projected/ec8ab263-5831-40ac-aa87-f187c2b59314-kube-api-access-fn65q\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.800222 4845 scope.go:117] "RemoveContainer" containerID="cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.829942 4845 scope.go:117] "RemoveContainer" containerID="35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.915030 4845 scope.go:117] "RemoveContainer" 
containerID="5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.916780 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022\": container with ID starting with 5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022 not found: ID does not exist" containerID="5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.916834 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022"} err="failed to get container status \"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022\": rpc error: code = NotFound desc = could not find container \"5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022\": container with ID starting with 5e76543513530e1ce52e8f23a6b0e52de4aa0c7016831a0149aecf1b2f9d7022 not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.916872 4845 scope.go:117] "RemoveContainer" containerID="cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.917383 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565\": container with ID starting with cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565 not found: ID does not exist" containerID="cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.917471 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565"} err="failed to get container status \"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565\": rpc error: code = NotFound desc = could not find container \"cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565\": container with ID starting with cf97ada089c899c976b0f969e8b4a9c10c10d10f5c77c914b24af209ddf1a565 not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.917544 4845 scope.go:117] "RemoveContainer" containerID="35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436" Feb 02 11:31:34 crc kubenswrapper[4845]: E0202 11:31:34.917971 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436\": container with ID starting with 35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436 not found: ID does not exist" containerID="35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.918011 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436"} err="failed to get container status \"35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436\": rpc error: code = NotFound desc = could not find container \"35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436\": container with ID starting with 35fc1714ec66a391ab718f3d9c895515ba095df3d443d24b7256dfb071c8d436 not found: ID does not exist" Feb 02 11:31:34 crc kubenswrapper[4845]: I0202 11:31:34.996168 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:35 crc kubenswrapper[4845]: I0202 11:31:35.009331 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-tz54l"] Feb 02 11:31:35 crc kubenswrapper[4845]: I0202 11:31:35.020120 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:35 crc kubenswrapper[4845]: I0202 11:31:35.033707 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bw7j2"] Feb 02 11:31:35 crc kubenswrapper[4845]: I0202 11:31:35.727840 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" path="/var/lib/kubelet/pods/84c47002-db31-44bf-8691-c1a21ba5a78e/volumes" Feb 02 11:31:35 crc kubenswrapper[4845]: I0202 11:31:35.729242 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" path="/var/lib/kubelet/pods/ec8ab263-5831-40ac-aa87-f187c2b59314/volumes" Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.237475 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.238154 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.238209 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.239107 4845 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.239179 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" gracePeriod=600 Feb 02 11:31:46 crc kubenswrapper[4845]: E0202 11:31:46.361270 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.681212 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" exitCode=0 Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.681250 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b"} Feb 02 11:31:46 crc kubenswrapper[4845]: I0202 11:31:46.681289 4845 scope.go:117] "RemoveContainer" containerID="d56234b2e16d7741ff4ae8dc5b5dbf48186b954479748d7114398eb29a827588" Feb 02 11:31:46 crc 
kubenswrapper[4845]: I0202 11:31:46.683701 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:31:46 crc kubenswrapper[4845]: E0202 11:31:46.684517 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:32:01 crc kubenswrapper[4845]: I0202 11:32:01.712956 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:32:01 crc kubenswrapper[4845]: E0202 11:32:01.713712 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:32:16 crc kubenswrapper[4845]: I0202 11:32:16.712930 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:32:16 crc kubenswrapper[4845]: E0202 11:32:16.713696 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 
02 11:32:29 crc kubenswrapper[4845]: I0202 11:32:29.723821 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:32:29 crc kubenswrapper[4845]: E0202 11:32:29.724654 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:32:40 crc kubenswrapper[4845]: I0202 11:32:40.713031 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:32:40 crc kubenswrapper[4845]: E0202 11:32:40.715119 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:32:55 crc kubenswrapper[4845]: I0202 11:32:55.713548 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:32:55 crc kubenswrapper[4845]: E0202 11:32:55.714406 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:33:09 crc kubenswrapper[4845]: I0202 11:33:09.713724 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:33:09 crc kubenswrapper[4845]: E0202 11:33:09.714587 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:33:24 crc kubenswrapper[4845]: I0202 11:33:24.713779 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:33:24 crc kubenswrapper[4845]: E0202 11:33:24.715072 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:33:38 crc kubenswrapper[4845]: I0202 11:33:38.713191 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:33:38 crc kubenswrapper[4845]: E0202 11:33:38.714076 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:33:50 crc kubenswrapper[4845]: I0202 11:33:50.714425 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:33:50 crc kubenswrapper[4845]: E0202 11:33:50.715225 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:34:04 crc kubenswrapper[4845]: I0202 11:34:04.713415 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:34:04 crc kubenswrapper[4845]: E0202 11:34:04.715762 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:34:17 crc kubenswrapper[4845]: I0202 11:34:17.714071 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:34:17 crc kubenswrapper[4845]: E0202 11:34:17.715611 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:34:31 crc kubenswrapper[4845]: I0202 11:34:31.713476 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:34:31 crc kubenswrapper[4845]: E0202 11:34:31.714553 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:34:43 crc kubenswrapper[4845]: I0202 11:34:43.713805 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:34:43 crc kubenswrapper[4845]: E0202 11:34:43.714503 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:34:56 crc kubenswrapper[4845]: I0202 11:34:56.713548 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:34:56 crc kubenswrapper[4845]: E0202 11:34:56.714776 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:35:07 crc kubenswrapper[4845]: I0202 11:35:07.714011 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:35:07 crc kubenswrapper[4845]: E0202 11:35:07.714758 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:35:21 crc kubenswrapper[4845]: I0202 11:35:21.713965 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:35:21 crc kubenswrapper[4845]: E0202 11:35:21.715820 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:35:32 crc kubenswrapper[4845]: I0202 11:35:32.713111 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:35:32 crc kubenswrapper[4845]: E0202 11:35:32.714156 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:35:45 crc kubenswrapper[4845]: I0202 11:35:45.714450 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:35:45 crc kubenswrapper[4845]: E0202 11:35:45.715409 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:35:57 crc kubenswrapper[4845]: I0202 11:35:57.713872 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:35:57 crc kubenswrapper[4845]: E0202 11:35:57.714837 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:36:08 crc kubenswrapper[4845]: I0202 11:36:08.713374 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:36:08 crc kubenswrapper[4845]: E0202 11:36:08.714419 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:36:23 crc kubenswrapper[4845]: I0202 11:36:23.714178 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:36:23 crc kubenswrapper[4845]: E0202 11:36:23.715124 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:36:36 crc kubenswrapper[4845]: I0202 11:36:36.713631 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:36:36 crc kubenswrapper[4845]: E0202 11:36:36.714656 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:36:50 crc kubenswrapper[4845]: I0202 11:36:50.712678 4845 scope.go:117] "RemoveContainer" containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:36:52 crc kubenswrapper[4845]: I0202 11:36:52.411936 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0"} Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.358585 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360151 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="extract-utilities" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360172 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="extract-utilities" Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360190 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="extract-utilities" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360197 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="extract-utilities" Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360219 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360229 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360247 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360254 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360281 4845 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="extract-content" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360288 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="extract-content" Feb 02 11:38:58 crc kubenswrapper[4845]: E0202 11:38:58.360317 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="extract-content" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360324 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="extract-content" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360568 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8ab263-5831-40ac-aa87-f187c2b59314" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.360586 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c47002-db31-44bf-8691-c1a21ba5a78e" containerName="registry-server" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.363120 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.379944 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.489363 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.489954 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.490099 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8fm\" (UniqueName: \"kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.592040 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.592179 4845 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cc8fm\" (UniqueName: \"kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.592209 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.592631 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.592630 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.628024 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8fm\" (UniqueName: \"kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm\") pod \"redhat-marketplace-sh64l\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:58 crc kubenswrapper[4845]: I0202 11:38:58.686455 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:38:59 crc kubenswrapper[4845]: I0202 11:38:59.236472 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:38:59 crc kubenswrapper[4845]: W0202 11:38:59.238503 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67feb8df_07b7_4752_953d_fa9c66d6504f.slice/crio-b51a14550ffae387b6e6b865beb3bd9e9d799c3dccf7ac00df3b329b73fe8a28 WatchSource:0}: Error finding container b51a14550ffae387b6e6b865beb3bd9e9d799c3dccf7ac00df3b329b73fe8a28: Status 404 returned error can't find the container with id b51a14550ffae387b6e6b865beb3bd9e9d799c3dccf7ac00df3b329b73fe8a28 Feb 02 11:38:59 crc kubenswrapper[4845]: I0202 11:38:59.801614 4845 generic.go:334] "Generic (PLEG): container finished" podID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerID="75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b" exitCode=0 Feb 02 11:38:59 crc kubenswrapper[4845]: I0202 11:38:59.801786 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerDied","Data":"75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b"} Feb 02 11:38:59 crc kubenswrapper[4845]: I0202 11:38:59.802218 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerStarted","Data":"b51a14550ffae387b6e6b865beb3bd9e9d799c3dccf7ac00df3b329b73fe8a28"} Feb 02 11:38:59 crc kubenswrapper[4845]: I0202 11:38:59.807282 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:39:00 crc kubenswrapper[4845]: I0202 11:39:00.817774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerStarted","Data":"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f"} Feb 02 11:39:02 crc kubenswrapper[4845]: I0202 11:39:02.845485 4845 generic.go:334] "Generic (PLEG): container finished" podID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerID="cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f" exitCode=0 Feb 02 11:39:02 crc kubenswrapper[4845]: I0202 11:39:02.845573 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerDied","Data":"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f"} Feb 02 11:39:03 crc kubenswrapper[4845]: I0202 11:39:03.863566 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerStarted","Data":"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5"} Feb 02 11:39:03 crc kubenswrapper[4845]: I0202 11:39:03.889055 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sh64l" podStartSLOduration=2.403569813 podStartE2EDuration="5.889011791s" podCreationTimestamp="2026-02-02 11:38:58 +0000 UTC" firstStartedPulling="2026-02-02 11:38:59.806937113 +0000 UTC m=+4020.898338563" lastFinishedPulling="2026-02-02 11:39:03.292379091 +0000 UTC m=+4024.383780541" observedRunningTime="2026-02-02 11:39:03.888049813 +0000 UTC m=+4024.979451283" watchObservedRunningTime="2026-02-02 11:39:03.889011791 +0000 UTC m=+4024.980413241" Feb 02 11:39:08 crc kubenswrapper[4845]: I0202 11:39:08.686820 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:08 crc kubenswrapper[4845]: I0202 11:39:08.687409 4845 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:09 crc kubenswrapper[4845]: I0202 11:39:09.740484 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sh64l" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="registry-server" probeResult="failure" output=< Feb 02 11:39:09 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:39:09 crc kubenswrapper[4845]: > Feb 02 11:39:16 crc kubenswrapper[4845]: I0202 11:39:16.237845 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:39:16 crc kubenswrapper[4845]: I0202 11:39:16.238863 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:39:18 crc kubenswrapper[4845]: I0202 11:39:18.734200 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:18 crc kubenswrapper[4845]: I0202 11:39:18.795372 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:18 crc kubenswrapper[4845]: I0202 11:39:18.971971 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.067086 4845 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-sh64l" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="registry-server" containerID="cri-o://073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5" gracePeriod=2 Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.749841 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.878678 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc8fm\" (UniqueName: \"kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm\") pod \"67feb8df-07b7-4752-953d-fa9c66d6504f\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.878995 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content\") pod \"67feb8df-07b7-4752-953d-fa9c66d6504f\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.879130 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities\") pod \"67feb8df-07b7-4752-953d-fa9c66d6504f\" (UID: \"67feb8df-07b7-4752-953d-fa9c66d6504f\") " Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.880355 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities" (OuterVolumeSpecName: "utilities") pod "67feb8df-07b7-4752-953d-fa9c66d6504f" (UID: "67feb8df-07b7-4752-953d-fa9c66d6504f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.882470 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.889553 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm" (OuterVolumeSpecName: "kube-api-access-cc8fm") pod "67feb8df-07b7-4752-953d-fa9c66d6504f" (UID: "67feb8df-07b7-4752-953d-fa9c66d6504f"). InnerVolumeSpecName "kube-api-access-cc8fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.910829 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67feb8df-07b7-4752-953d-fa9c66d6504f" (UID: "67feb8df-07b7-4752-953d-fa9c66d6504f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.984502 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc8fm\" (UniqueName: \"kubernetes.io/projected/67feb8df-07b7-4752-953d-fa9c66d6504f-kube-api-access-cc8fm\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:20 crc kubenswrapper[4845]: I0202 11:39:20.984581 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67feb8df-07b7-4752-953d-fa9c66d6504f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.083963 4845 generic.go:334] "Generic (PLEG): container finished" podID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerID="073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5" exitCode=0 Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.083995 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh64l" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.084044 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerDied","Data":"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5"} Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.085107 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh64l" event={"ID":"67feb8df-07b7-4752-953d-fa9c66d6504f","Type":"ContainerDied","Data":"b51a14550ffae387b6e6b865beb3bd9e9d799c3dccf7ac00df3b329b73fe8a28"} Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.085157 4845 scope.go:117] "RemoveContainer" containerID="073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.109847 4845 scope.go:117] "RemoveContainer" 
containerID="cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.138004 4845 scope.go:117] "RemoveContainer" containerID="75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.144422 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.161172 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh64l"] Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.194427 4845 scope.go:117] "RemoveContainer" containerID="073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5" Feb 02 11:39:21 crc kubenswrapper[4845]: E0202 11:39:21.195025 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5\": container with ID starting with 073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5 not found: ID does not exist" containerID="073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.195119 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5"} err="failed to get container status \"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5\": rpc error: code = NotFound desc = could not find container \"073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5\": container with ID starting with 073226b6e0385de076f5a1c9fa4992d18cbc2ac8701878ac3a64f280f9918ac5 not found: ID does not exist" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.195175 4845 scope.go:117] "RemoveContainer" 
containerID="cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f" Feb 02 11:39:21 crc kubenswrapper[4845]: E0202 11:39:21.195578 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f\": container with ID starting with cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f not found: ID does not exist" containerID="cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.195616 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f"} err="failed to get container status \"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f\": rpc error: code = NotFound desc = could not find container \"cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f\": container with ID starting with cf3e00d0c3fb340efd3b1bce07e9a00e299f42f63131def5f28c9ed18183f99f not found: ID does not exist" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.195637 4845 scope.go:117] "RemoveContainer" containerID="75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b" Feb 02 11:39:21 crc kubenswrapper[4845]: E0202 11:39:21.196000 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b\": container with ID starting with 75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b not found: ID does not exist" containerID="75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.196034 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b"} err="failed to get container status \"75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b\": rpc error: code = NotFound desc = could not find container \"75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b\": container with ID starting with 75e25bcbde5b6d5477f418d310a09313cb8027d9f6162a225707f9ab5ed36b8b not found: ID does not exist" Feb 02 11:39:21 crc kubenswrapper[4845]: I0202 11:39:21.728318 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" path="/var/lib/kubelet/pods/67feb8df-07b7-4752-953d-fa9c66d6504f/volumes" Feb 02 11:39:46 crc kubenswrapper[4845]: I0202 11:39:46.238183 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:39:46 crc kubenswrapper[4845]: I0202 11:39:46.239185 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.688386 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:07 crc kubenswrapper[4845]: E0202 11:40:07.689563 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="extract-content" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.689583 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" 
containerName="extract-content" Feb 02 11:40:07 crc kubenswrapper[4845]: E0202 11:40:07.689614 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="registry-server" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.689622 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="registry-server" Feb 02 11:40:07 crc kubenswrapper[4845]: E0202 11:40:07.689641 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="extract-utilities" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.689650 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="extract-utilities" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.691905 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="67feb8df-07b7-4752-953d-fa9c66d6504f" containerName="registry-server" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.694213 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.701939 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.704378 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.704464 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjng5\" (UniqueName: \"kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.704516 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.806047 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.806451 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jjng5\" (UniqueName: \"kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.806483 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.806691 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.807060 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:07 crc kubenswrapper[4845]: I0202 11:40:07.828148 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjng5\" (UniqueName: \"kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5\") pod \"redhat-operators-266w9\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:08 crc kubenswrapper[4845]: I0202 11:40:08.026847 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:08 crc kubenswrapper[4845]: I0202 11:40:08.606322 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:09 crc kubenswrapper[4845]: I0202 11:40:09.556843 4845 generic.go:334] "Generic (PLEG): container finished" podID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerID="19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51" exitCode=0 Feb 02 11:40:09 crc kubenswrapper[4845]: I0202 11:40:09.557446 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerDied","Data":"19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51"} Feb 02 11:40:09 crc kubenswrapper[4845]: I0202 11:40:09.557494 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerStarted","Data":"c968d34b73243acb3c2e71d7297b25828170435ef550b0aa3c2183bf30a6c523"} Feb 02 11:40:11 crc kubenswrapper[4845]: I0202 11:40:11.585211 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerStarted","Data":"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185"} Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.237734 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.238514 4845 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.238564 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.239483 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.239558 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0" gracePeriod=600 Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.633034 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0" exitCode=0 Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.633107 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0"} Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.633147 4845 scope.go:117] "RemoveContainer" 
containerID="dd6b3930591fb138d17122481465759b520d0466bb89df22ab0dfde088cbcd5b" Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.635656 4845 generic.go:334] "Generic (PLEG): container finished" podID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerID="cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185" exitCode=0 Feb 02 11:40:16 crc kubenswrapper[4845]: I0202 11:40:16.635686 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerDied","Data":"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185"} Feb 02 11:40:17 crc kubenswrapper[4845]: I0202 11:40:17.650835 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e"} Feb 02 11:40:17 crc kubenswrapper[4845]: I0202 11:40:17.653677 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerStarted","Data":"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829"} Feb 02 11:40:17 crc kubenswrapper[4845]: I0202 11:40:17.688327 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-266w9" podStartSLOduration=3.210041388 podStartE2EDuration="10.688292868s" podCreationTimestamp="2026-02-02 11:40:07 +0000 UTC" firstStartedPulling="2026-02-02 11:40:09.559406606 +0000 UTC m=+4090.650808056" lastFinishedPulling="2026-02-02 11:40:17.037658086 +0000 UTC m=+4098.129059536" observedRunningTime="2026-02-02 11:40:17.685719014 +0000 UTC m=+4098.777120464" watchObservedRunningTime="2026-02-02 11:40:17.688292868 +0000 UTC m=+4098.779694318" Feb 02 11:40:18 crc kubenswrapper[4845]: I0202 
11:40:18.027462 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:18 crc kubenswrapper[4845]: I0202 11:40:18.027639 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:19 crc kubenswrapper[4845]: I0202 11:40:19.079772 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-266w9" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="registry-server" probeResult="failure" output=< Feb 02 11:40:19 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:40:19 crc kubenswrapper[4845]: > Feb 02 11:40:28 crc kubenswrapper[4845]: I0202 11:40:28.083341 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:28 crc kubenswrapper[4845]: I0202 11:40:28.134578 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:28 crc kubenswrapper[4845]: I0202 11:40:28.320663 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:29 crc kubenswrapper[4845]: I0202 11:40:29.770378 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-266w9" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="registry-server" containerID="cri-o://9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829" gracePeriod=2 Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.333871 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.364906 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content\") pod \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.365293 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities\") pod \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.365399 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjng5\" (UniqueName: \"kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5\") pod \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\" (UID: \"2d007ebd-6c93-47d1-956b-7e27aab4bf22\") " Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.366437 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities" (OuterVolumeSpecName: "utilities") pod "2d007ebd-6c93-47d1-956b-7e27aab4bf22" (UID: "2d007ebd-6c93-47d1-956b-7e27aab4bf22"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.367494 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.382604 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5" (OuterVolumeSpecName: "kube-api-access-jjng5") pod "2d007ebd-6c93-47d1-956b-7e27aab4bf22" (UID: "2d007ebd-6c93-47d1-956b-7e27aab4bf22"). InnerVolumeSpecName "kube-api-access-jjng5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.470797 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjng5\" (UniqueName: \"kubernetes.io/projected/2d007ebd-6c93-47d1-956b-7e27aab4bf22-kube-api-access-jjng5\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.496661 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d007ebd-6c93-47d1-956b-7e27aab4bf22" (UID: "2d007ebd-6c93-47d1-956b-7e27aab4bf22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.573000 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d007ebd-6c93-47d1-956b-7e27aab4bf22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.784178 4845 generic.go:334] "Generic (PLEG): container finished" podID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerID="9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829" exitCode=0 Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.784239 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerDied","Data":"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829"} Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.784277 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-266w9" event={"ID":"2d007ebd-6c93-47d1-956b-7e27aab4bf22","Type":"ContainerDied","Data":"c968d34b73243acb3c2e71d7297b25828170435ef550b0aa3c2183bf30a6c523"} Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.784298 4845 scope.go:117] "RemoveContainer" containerID="9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.785135 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-266w9" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.825089 4845 scope.go:117] "RemoveContainer" containerID="cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.830969 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.851358 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-266w9"] Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.860284 4845 scope.go:117] "RemoveContainer" containerID="19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.927511 4845 scope.go:117] "RemoveContainer" containerID="9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829" Feb 02 11:40:30 crc kubenswrapper[4845]: E0202 11:40:30.928248 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829\": container with ID starting with 9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829 not found: ID does not exist" containerID="9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.928337 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829"} err="failed to get container status \"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829\": rpc error: code = NotFound desc = could not find container \"9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829\": container with ID starting with 9f85f47d7d53b247d2aaa9b2915576ddd222abfe2796babed6a11752fdb51829 not found: ID does 
not exist" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.928387 4845 scope.go:117] "RemoveContainer" containerID="cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185" Feb 02 11:40:30 crc kubenswrapper[4845]: E0202 11:40:30.928935 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185\": container with ID starting with cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185 not found: ID does not exist" containerID="cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.929102 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185"} err="failed to get container status \"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185\": rpc error: code = NotFound desc = could not find container \"cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185\": container with ID starting with cbdd1b011726d9749fa73a1fbd542abedb6819140f686580ce200cc93604a185 not found: ID does not exist" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.929199 4845 scope.go:117] "RemoveContainer" containerID="19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51" Feb 02 11:40:30 crc kubenswrapper[4845]: E0202 11:40:30.929841 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51\": container with ID starting with 19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51 not found: ID does not exist" containerID="19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51" Feb 02 11:40:30 crc kubenswrapper[4845]: I0202 11:40:30.929899 4845 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51"} err="failed to get container status \"19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51\": rpc error: code = NotFound desc = could not find container \"19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51\": container with ID starting with 19c112365c91ffcc492cc29784664b1d9fbca11452f55521f5b62254fd898f51 not found: ID does not exist" Feb 02 11:40:31 crc kubenswrapper[4845]: I0202 11:40:31.737605 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" path="/var/lib/kubelet/pods/2d007ebd-6c93-47d1-956b-7e27aab4bf22/volumes" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.486930 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:12 crc kubenswrapper[4845]: E0202 11:41:12.488144 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="extract-content" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.488166 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="extract-content" Feb 02 11:41:12 crc kubenswrapper[4845]: E0202 11:41:12.488184 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="registry-server" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.488192 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="registry-server" Feb 02 11:41:12 crc kubenswrapper[4845]: E0202 11:41:12.488208 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="extract-utilities" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.488218 4845 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="extract-utilities" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.488574 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d007ebd-6c93-47d1-956b-7e27aab4bf22" containerName="registry-server" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.490666 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.502944 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.638001 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.638253 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.638400 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9x9\" (UniqueName: \"kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 
11:41:12.741425 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.741546 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.741604 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9x9\" (UniqueName: \"kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.742086 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.742151 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.773327 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9x9\" (UniqueName: \"kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9\") pod \"community-operators-9kpk6\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:12 crc kubenswrapper[4845]: I0202 11:41:12.820251 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:13 crc kubenswrapper[4845]: I0202 11:41:13.490626 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:14 crc kubenswrapper[4845]: I0202 11:41:14.257840 4845 generic.go:334] "Generic (PLEG): container finished" podID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerID="5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3" exitCode=0 Feb 02 11:41:14 crc kubenswrapper[4845]: I0202 11:41:14.258406 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerDied","Data":"5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3"} Feb 02 11:41:14 crc kubenswrapper[4845]: I0202 11:41:14.258465 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerStarted","Data":"f859c419b8dc176d5884a145bc04659a1fcf18f030931df90991f91e0ffc55cb"} Feb 02 11:41:16 crc kubenswrapper[4845]: I0202 11:41:16.287705 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerStarted","Data":"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c"} Feb 02 11:41:17 crc kubenswrapper[4845]: I0202 11:41:17.301862 4845 
generic.go:334] "Generic (PLEG): container finished" podID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerID="8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c" exitCode=0 Feb 02 11:41:17 crc kubenswrapper[4845]: I0202 11:41:17.301929 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerDied","Data":"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c"} Feb 02 11:41:19 crc kubenswrapper[4845]: I0202 11:41:19.332997 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerStarted","Data":"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3"} Feb 02 11:41:19 crc kubenswrapper[4845]: I0202 11:41:19.360643 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9kpk6" podStartSLOduration=3.567388535 podStartE2EDuration="7.36062426s" podCreationTimestamp="2026-02-02 11:41:12 +0000 UTC" firstStartedPulling="2026-02-02 11:41:14.266133771 +0000 UTC m=+4155.357535221" lastFinishedPulling="2026-02-02 11:41:18.059369496 +0000 UTC m=+4159.150770946" observedRunningTime="2026-02-02 11:41:19.357872071 +0000 UTC m=+4160.449273521" watchObservedRunningTime="2026-02-02 11:41:19.36062426 +0000 UTC m=+4160.452025720" Feb 02 11:41:22 crc kubenswrapper[4845]: I0202 11:41:22.823677 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:22 crc kubenswrapper[4845]: I0202 11:41:22.824303 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:22 crc kubenswrapper[4845]: I0202 11:41:22.876541 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.246802 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.251722 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.276664 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.387222 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.387308 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.387384 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjpn\" (UniqueName: \"kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.491017 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4xjpn\" (UniqueName: \"kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.491370 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.491469 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.492089 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.492131 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.512628 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjpn\" (UniqueName: 
\"kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn\") pod \"certified-operators-n48dz\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:29 crc kubenswrapper[4845]: I0202 11:41:29.578121 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:30 crc kubenswrapper[4845]: I0202 11:41:30.180771 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:30 crc kubenswrapper[4845]: W0202 11:41:30.847920 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaabb599d_7f64_43cf_8580_ae0f70c90035.slice/crio-2c6b56543eb6367bcd8aaa40ba7660a352f53072e103c5d049826a8cca5fee21 WatchSource:0}: Error finding container 2c6b56543eb6367bcd8aaa40ba7660a352f53072e103c5d049826a8cca5fee21: Status 404 returned error can't find the container with id 2c6b56543eb6367bcd8aaa40ba7660a352f53072e103c5d049826a8cca5fee21 Feb 02 11:41:31 crc kubenswrapper[4845]: I0202 11:41:31.459264 4845 generic.go:334] "Generic (PLEG): container finished" podID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerID="d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d" exitCode=0 Feb 02 11:41:31 crc kubenswrapper[4845]: I0202 11:41:31.459328 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerDied","Data":"d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d"} Feb 02 11:41:31 crc kubenswrapper[4845]: I0202 11:41:31.460292 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" 
event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerStarted","Data":"2c6b56543eb6367bcd8aaa40ba7660a352f53072e103c5d049826a8cca5fee21"} Feb 02 11:41:32 crc kubenswrapper[4845]: I0202 11:41:32.880599 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:33 crc kubenswrapper[4845]: I0202 11:41:33.488801 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerStarted","Data":"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390"} Feb 02 11:41:33 crc kubenswrapper[4845]: I0202 11:41:33.630246 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:33 crc kubenswrapper[4845]: I0202 11:41:33.630639 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9kpk6" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="registry-server" containerID="cri-o://7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3" gracePeriod=2 Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.294749 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.478437 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9x9\" (UniqueName: \"kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9\") pod \"47fee595-38a0-4778-a0f3-e9e4c5004787\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.478796 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content\") pod \"47fee595-38a0-4778-a0f3-e9e4c5004787\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.478836 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities\") pod \"47fee595-38a0-4778-a0f3-e9e4c5004787\" (UID: \"47fee595-38a0-4778-a0f3-e9e4c5004787\") " Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.480170 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities" (OuterVolumeSpecName: "utilities") pod "47fee595-38a0-4778-a0f3-e9e4c5004787" (UID: "47fee595-38a0-4778-a0f3-e9e4c5004787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.484103 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9" (OuterVolumeSpecName: "kube-api-access-mr9x9") pod "47fee595-38a0-4778-a0f3-e9e4c5004787" (UID: "47fee595-38a0-4778-a0f3-e9e4c5004787"). InnerVolumeSpecName "kube-api-access-mr9x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.502295 4845 generic.go:334] "Generic (PLEG): container finished" podID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerID="7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3" exitCode=0 Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.503032 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9kpk6" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.503464 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerDied","Data":"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3"} Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.503505 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9kpk6" event={"ID":"47fee595-38a0-4778-a0f3-e9e4c5004787","Type":"ContainerDied","Data":"f859c419b8dc176d5884a145bc04659a1fcf18f030931df90991f91e0ffc55cb"} Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.503527 4845 scope.go:117] "RemoveContainer" containerID="7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.542299 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47fee595-38a0-4778-a0f3-e9e4c5004787" (UID: "47fee595-38a0-4778-a0f3-e9e4c5004787"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.561661 4845 scope.go:117] "RemoveContainer" containerID="8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.582136 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9x9\" (UniqueName: \"kubernetes.io/projected/47fee595-38a0-4778-a0f3-e9e4c5004787-kube-api-access-mr9x9\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.582172 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.582182 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47fee595-38a0-4778-a0f3-e9e4c5004787-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.633302 4845 scope.go:117] "RemoveContainer" containerID="5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.693381 4845 scope.go:117] "RemoveContainer" containerID="7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3" Feb 02 11:41:34 crc kubenswrapper[4845]: E0202 11:41:34.694051 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3\": container with ID starting with 7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3 not found: ID does not exist" containerID="7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.694123 4845 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3"} err="failed to get container status \"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3\": rpc error: code = NotFound desc = could not find container \"7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3\": container with ID starting with 7f4c32c86c81432e75b8b963e39f54cdbd13f2970162f200c9c79831d58966f3 not found: ID does not exist" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.694163 4845 scope.go:117] "RemoveContainer" containerID="8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c" Feb 02 11:41:34 crc kubenswrapper[4845]: E0202 11:41:34.694716 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c\": container with ID starting with 8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c not found: ID does not exist" containerID="8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.694770 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c"} err="failed to get container status \"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c\": rpc error: code = NotFound desc = could not find container \"8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c\": container with ID starting with 8c6b33ba6ec986656edb201decc27f2cdd09c822f842166087490f5082946d2c not found: ID does not exist" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.694812 4845 scope.go:117] "RemoveContainer" containerID="5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3" Feb 02 11:41:34 crc kubenswrapper[4845]: E0202 11:41:34.695161 4845 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3\": container with ID starting with 5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3 not found: ID does not exist" containerID="5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.695198 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3"} err="failed to get container status \"5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3\": rpc error: code = NotFound desc = could not find container \"5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3\": container with ID starting with 5eae645488d5eae73f94ff42e591bcb15170f085ebc2b4587ea7c2c56f1f59b3 not found: ID does not exist" Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.844841 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:34 crc kubenswrapper[4845]: I0202 11:41:34.865850 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9kpk6"] Feb 02 11:41:35 crc kubenswrapper[4845]: I0202 11:41:35.521318 4845 generic.go:334] "Generic (PLEG): container finished" podID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerID="1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390" exitCode=0 Feb 02 11:41:35 crc kubenswrapper[4845]: I0202 11:41:35.521404 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerDied","Data":"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390"} Feb 02 11:41:35 crc kubenswrapper[4845]: I0202 11:41:35.729128 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" path="/var/lib/kubelet/pods/47fee595-38a0-4778-a0f3-e9e4c5004787/volumes" Feb 02 11:41:36 crc kubenswrapper[4845]: I0202 11:41:36.535668 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerStarted","Data":"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7"} Feb 02 11:41:36 crc kubenswrapper[4845]: I0202 11:41:36.569614 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n48dz" podStartSLOduration=3.069253344 podStartE2EDuration="7.569589525s" podCreationTimestamp="2026-02-02 11:41:29 +0000 UTC" firstStartedPulling="2026-02-02 11:41:31.461700511 +0000 UTC m=+4172.553101961" lastFinishedPulling="2026-02-02 11:41:35.962036692 +0000 UTC m=+4177.053438142" observedRunningTime="2026-02-02 11:41:36.554909073 +0000 UTC m=+4177.646310523" watchObservedRunningTime="2026-02-02 11:41:36.569589525 +0000 UTC m=+4177.660990965" Feb 02 11:41:39 crc kubenswrapper[4845]: I0202 11:41:39.578783 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:39 crc kubenswrapper[4845]: I0202 11:41:39.579213 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:39 crc kubenswrapper[4845]: I0202 11:41:39.634126 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:49 crc kubenswrapper[4845]: I0202 11:41:49.629546 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:49 crc kubenswrapper[4845]: I0202 11:41:49.702400 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:49 crc kubenswrapper[4845]: I0202 11:41:49.702794 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n48dz" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="registry-server" containerID="cri-o://858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7" gracePeriod=2 Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.265095 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.301287 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities\") pod \"aabb599d-7f64-43cf-8580-ae0f70c90035\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.301360 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content\") pod \"aabb599d-7f64-43cf-8580-ae0f70c90035\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.301533 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xjpn\" (UniqueName: \"kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn\") pod \"aabb599d-7f64-43cf-8580-ae0f70c90035\" (UID: \"aabb599d-7f64-43cf-8580-ae0f70c90035\") " Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.303001 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities" (OuterVolumeSpecName: "utilities") pod "aabb599d-7f64-43cf-8580-ae0f70c90035" (UID: 
"aabb599d-7f64-43cf-8580-ae0f70c90035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.308952 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn" (OuterVolumeSpecName: "kube-api-access-4xjpn") pod "aabb599d-7f64-43cf-8580-ae0f70c90035" (UID: "aabb599d-7f64-43cf-8580-ae0f70c90035"). InnerVolumeSpecName "kube-api-access-4xjpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.378584 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aabb599d-7f64-43cf-8580-ae0f70c90035" (UID: "aabb599d-7f64-43cf-8580-ae0f70c90035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.405095 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.405133 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabb599d-7f64-43cf-8580-ae0f70c90035-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.405145 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xjpn\" (UniqueName: \"kubernetes.io/projected/aabb599d-7f64-43cf-8580-ae0f70c90035-kube-api-access-4xjpn\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.689396 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerID="858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7" exitCode=0 Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.689469 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerDied","Data":"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7"} Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.689774 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n48dz" event={"ID":"aabb599d-7f64-43cf-8580-ae0f70c90035","Type":"ContainerDied","Data":"2c6b56543eb6367bcd8aaa40ba7660a352f53072e103c5d049826a8cca5fee21"} Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.689803 4845 scope.go:117] "RemoveContainer" containerID="858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.689497 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n48dz" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.725367 4845 scope.go:117] "RemoveContainer" containerID="1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.734203 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.745513 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n48dz"] Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.751698 4845 scope.go:117] "RemoveContainer" containerID="d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.819649 4845 scope.go:117] "RemoveContainer" containerID="858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7" Feb 02 11:41:50 crc kubenswrapper[4845]: E0202 11:41:50.820087 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7\": container with ID starting with 858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7 not found: ID does not exist" containerID="858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.820124 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7"} err="failed to get container status \"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7\": rpc error: code = NotFound desc = could not find container \"858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7\": container with ID starting with 858e2ad35137cabc57496d24f9faadab9da1de8d79e3279363178d2796ca85c7 not 
found: ID does not exist" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.820148 4845 scope.go:117] "RemoveContainer" containerID="1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390" Feb 02 11:41:50 crc kubenswrapper[4845]: E0202 11:41:50.820375 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390\": container with ID starting with 1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390 not found: ID does not exist" containerID="1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.820397 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390"} err="failed to get container status \"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390\": rpc error: code = NotFound desc = could not find container \"1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390\": container with ID starting with 1fbe7e6b5331292a7b57c3a153e77040eb16b249e6958ff12337b94b4aeda390 not found: ID does not exist" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.820427 4845 scope.go:117] "RemoveContainer" containerID="d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d" Feb 02 11:41:50 crc kubenswrapper[4845]: E0202 11:41:50.820672 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d\": container with ID starting with d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d not found: ID does not exist" containerID="d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d" Feb 02 11:41:50 crc kubenswrapper[4845]: I0202 11:41:50.820701 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d"} err="failed to get container status \"d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d\": rpc error: code = NotFound desc = could not find container \"d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d\": container with ID starting with d516417ab0b6ce99ddf176a27417c14b42869cbf109dc31e5d52f4a8f17c836d not found: ID does not exist" Feb 02 11:41:51 crc kubenswrapper[4845]: I0202 11:41:51.729322 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" path="/var/lib/kubelet/pods/aabb599d-7f64-43cf-8580-ae0f70c90035/volumes" Feb 02 11:42:46 crc kubenswrapper[4845]: I0202 11:42:46.237369 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:42:46 crc kubenswrapper[4845]: I0202 11:42:46.238056 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:43:16 crc kubenswrapper[4845]: I0202 11:43:16.237786 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:43:16 crc kubenswrapper[4845]: I0202 11:43:16.238419 4845 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.237859 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.238477 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.238556 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.240677 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.240748 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" 
containerID="cri-o://a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" gracePeriod=600 Feb 02 11:43:46 crc kubenswrapper[4845]: E0202 11:43:46.368177 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.958020 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" exitCode=0 Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.958074 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e"} Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.958117 4845 scope.go:117] "RemoveContainer" containerID="800a02972d6ec813ad234012277534be95b3106d04c712ded96644b0403433e0" Feb 02 11:43:46 crc kubenswrapper[4845]: I0202 11:43:46.959217 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:43:46 crc kubenswrapper[4845]: E0202 11:43:46.959646 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:44:00 crc kubenswrapper[4845]: I0202 11:44:00.712757 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:44:00 crc kubenswrapper[4845]: E0202 11:44:00.713997 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:44:12 crc kubenswrapper[4845]: I0202 11:44:12.714195 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:44:12 crc kubenswrapper[4845]: E0202 11:44:12.715386 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:44:25 crc kubenswrapper[4845]: I0202 11:44:25.713150 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:44:25 crc kubenswrapper[4845]: E0202 11:44:25.714195 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:44:37 crc kubenswrapper[4845]: I0202 11:44:37.714139 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:44:37 crc kubenswrapper[4845]: E0202 11:44:37.716366 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:44:52 crc kubenswrapper[4845]: I0202 11:44:52.714562 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:44:52 crc kubenswrapper[4845]: E0202 11:44:52.716419 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.183957 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d"] Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185209 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185235 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185254 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185260 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185280 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185286 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185299 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185305 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185313 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185319 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4845]: E0202 11:45:00.185377 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185388 4845 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185654 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="47fee595-38a0-4778-a0f3-e9e4c5004787" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.185684 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabb599d-7f64-43cf-8580-ae0f70c90035" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.186761 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.189241 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.191042 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.196915 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d"] Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.217450 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.217658 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqk5\" (UniqueName: 
\"kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.217771 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.319833 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.319964 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqk5\" (UniqueName: \"kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.320019 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc 
kubenswrapper[4845]: I0202 11:45:00.321051 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.326135 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.340133 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqk5\" (UniqueName: \"kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5\") pod \"collect-profiles-29500545-wz76d\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:00 crc kubenswrapper[4845]: I0202 11:45:00.519542 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:01 crc kubenswrapper[4845]: I0202 11:45:01.041754 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d"] Feb 02 11:45:01 crc kubenswrapper[4845]: I0202 11:45:01.762808 4845 generic.go:334] "Generic (PLEG): container finished" podID="ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" containerID="e84bf218f680128502516aca2b553261d33f0d15f1e0480a845a83cff580ec8e" exitCode=0 Feb 02 11:45:01 crc kubenswrapper[4845]: I0202 11:45:01.762852 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" event={"ID":"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a","Type":"ContainerDied","Data":"e84bf218f680128502516aca2b553261d33f0d15f1e0480a845a83cff580ec8e"} Feb 02 11:45:01 crc kubenswrapper[4845]: I0202 11:45:01.763100 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" event={"ID":"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a","Type":"ContainerStarted","Data":"2952da291d63c09a95bde3fdb6ca7d09f64a7bc5ff5cad303ce2d5892d497cf0"} Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.222671 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.400756 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxqk5\" (UniqueName: \"kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5\") pod \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.400859 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume\") pod \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.401160 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume\") pod \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\" (UID: \"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a\") " Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.402182 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" (UID: "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.407557 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" (UID: "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.409201 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5" (OuterVolumeSpecName: "kube-api-access-zxqk5") pod "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" (UID: "ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a"). InnerVolumeSpecName "kube-api-access-zxqk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.504215 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.504255 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxqk5\" (UniqueName: \"kubernetes.io/projected/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-kube-api-access-zxqk5\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.504268 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.789312 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" event={"ID":"ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a","Type":"ContainerDied","Data":"2952da291d63c09a95bde3fdb6ca7d09f64a7bc5ff5cad303ce2d5892d497cf0"} Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.789365 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2952da291d63c09a95bde3fdb6ca7d09f64a7bc5ff5cad303ce2d5892d497cf0" Feb 02 11:45:03 crc kubenswrapper[4845]: I0202 11:45:03.789403 4845 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-wz76d" Feb 02 11:45:04 crc kubenswrapper[4845]: I0202 11:45:04.320448 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb"] Feb 02 11:45:04 crc kubenswrapper[4845]: I0202 11:45:04.331566 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-g5gxb"] Feb 02 11:45:05 crc kubenswrapper[4845]: I0202 11:45:05.729906 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b535e9d-4510-4191-9ab5-768d449b7bc3" path="/var/lib/kubelet/pods/6b535e9d-4510-4191-9ab5-768d449b7bc3/volumes" Feb 02 11:45:07 crc kubenswrapper[4845]: I0202 11:45:07.713771 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:45:07 crc kubenswrapper[4845]: E0202 11:45:07.714432 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:45:11 crc kubenswrapper[4845]: I0202 11:45:11.752269 4845 scope.go:117] "RemoveContainer" containerID="e8b882d2679f84fe117d5aa26326dd7526ca6a2f74a7144d1db5d6a93a707b5a" Feb 02 11:45:19 crc kubenswrapper[4845]: I0202 11:45:19.723319 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:45:19 crc kubenswrapper[4845]: E0202 11:45:19.724223 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:45:30 crc kubenswrapper[4845]: I0202 11:45:30.713531 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:45:30 crc kubenswrapper[4845]: E0202 11:45:30.714585 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:45:45 crc kubenswrapper[4845]: I0202 11:45:45.712741 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:45:45 crc kubenswrapper[4845]: E0202 11:45:45.713567 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:46:00 crc kubenswrapper[4845]: I0202 11:46:00.714328 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:46:00 crc kubenswrapper[4845]: E0202 11:46:00.716072 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:46:13 crc kubenswrapper[4845]: I0202 11:46:13.713138 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:46:13 crc kubenswrapper[4845]: E0202 11:46:13.714077 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:46:26 crc kubenswrapper[4845]: I0202 11:46:26.713997 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:46:26 crc kubenswrapper[4845]: E0202 11:46:26.714955 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:46:41 crc kubenswrapper[4845]: I0202 11:46:41.713606 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:46:41 crc kubenswrapper[4845]: E0202 11:46:41.715138 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:46:53 crc kubenswrapper[4845]: I0202 11:46:53.713397 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:46:53 crc kubenswrapper[4845]: E0202 11:46:53.714301 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:47:06 crc kubenswrapper[4845]: I0202 11:47:06.714142 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:47:06 crc kubenswrapper[4845]: E0202 11:47:06.716003 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:47:17 crc kubenswrapper[4845]: I0202 11:47:17.714293 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:47:17 crc kubenswrapper[4845]: E0202 11:47:17.716834 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:47:30 crc kubenswrapper[4845]: I0202 11:47:30.713210 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:47:30 crc kubenswrapper[4845]: E0202 11:47:30.714031 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:47:41 crc kubenswrapper[4845]: I0202 11:47:41.713902 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:47:41 crc kubenswrapper[4845]: E0202 11:47:41.715007 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:47:53 crc kubenswrapper[4845]: I0202 11:47:53.713158 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:47:53 crc kubenswrapper[4845]: E0202 11:47:53.714095 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:48:07 crc kubenswrapper[4845]: I0202 11:48:07.713001 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:48:07 crc kubenswrapper[4845]: E0202 11:48:07.713745 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:48:19 crc kubenswrapper[4845]: I0202 11:48:19.725532 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:48:19 crc kubenswrapper[4845]: E0202 11:48:19.728182 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:48:32 crc kubenswrapper[4845]: I0202 11:48:32.712699 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:48:32 crc kubenswrapper[4845]: E0202 11:48:32.714760 4845 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:48:47 crc kubenswrapper[4845]: I0202 11:48:47.713298 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:48:48 crc kubenswrapper[4845]: I0202 11:48:48.366813 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5"} Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.562903 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:27 crc kubenswrapper[4845]: E0202 11:49:27.564332 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" containerName="collect-profiles" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.564353 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" containerName="collect-profiles" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.564647 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9d5873-9ee1-49b9-9cc7-ebed0a0a0c7a" containerName="collect-profiles" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.566919 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.578491 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.649284 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrxd\" (UniqueName: \"kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.649354 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.649551 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.763331 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.763837 4845 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dfrxd\" (UniqueName: \"kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.763904 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.763932 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.764384 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.790816 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrxd\" (UniqueName: \"kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd\") pod \"redhat-marketplace-zsl9w\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:27 crc kubenswrapper[4845]: I0202 11:49:27.899319 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:28 crc kubenswrapper[4845]: I0202 11:49:28.490407 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:28 crc kubenswrapper[4845]: I0202 11:49:28.780035 4845 generic.go:334] "Generic (PLEG): container finished" podID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerID="ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1" exitCode=0 Feb 02 11:49:28 crc kubenswrapper[4845]: I0202 11:49:28.780107 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerDied","Data":"ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1"} Feb 02 11:49:28 crc kubenswrapper[4845]: I0202 11:49:28.780417 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerStarted","Data":"9bbd4f552c2c3759ffc1f28e4eaea61e8145683e0d4c4747293158c6d5f24d0d"} Feb 02 11:49:28 crc kubenswrapper[4845]: I0202 11:49:28.782077 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:49:29 crc kubenswrapper[4845]: I0202 11:49:29.794154 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerStarted","Data":"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26"} Feb 02 11:49:30 crc kubenswrapper[4845]: I0202 11:49:30.806613 4845 generic.go:334] "Generic (PLEG): container finished" podID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerID="958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26" exitCode=0 Feb 02 11:49:30 crc kubenswrapper[4845]: I0202 11:49:30.806836 4845 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerDied","Data":"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26"} Feb 02 11:49:31 crc kubenswrapper[4845]: I0202 11:49:31.819631 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerStarted","Data":"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6"} Feb 02 11:49:31 crc kubenswrapper[4845]: I0202 11:49:31.848687 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zsl9w" podStartSLOduration=2.2663620460000002 podStartE2EDuration="4.848659734s" podCreationTimestamp="2026-02-02 11:49:27 +0000 UTC" firstStartedPulling="2026-02-02 11:49:28.781821568 +0000 UTC m=+4649.873223008" lastFinishedPulling="2026-02-02 11:49:31.364119246 +0000 UTC m=+4652.455520696" observedRunningTime="2026-02-02 11:49:31.842502577 +0000 UTC m=+4652.933904027" watchObservedRunningTime="2026-02-02 11:49:31.848659734 +0000 UTC m=+4652.940061184" Feb 02 11:49:37 crc kubenswrapper[4845]: I0202 11:49:37.900292 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:37 crc kubenswrapper[4845]: I0202 11:49:37.900843 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:37 crc kubenswrapper[4845]: I0202 11:49:37.955068 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:38 crc kubenswrapper[4845]: I0202 11:49:38.954410 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:39 crc kubenswrapper[4845]: I0202 
11:49:39.014520 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:40 crc kubenswrapper[4845]: I0202 11:49:40.912967 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zsl9w" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="registry-server" containerID="cri-o://646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6" gracePeriod=2 Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.496648 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.623375 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content\") pod \"3b4186e3-63c2-40ae-8303-6f87b8e32247\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.623904 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrxd\" (UniqueName: \"kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd\") pod \"3b4186e3-63c2-40ae-8303-6f87b8e32247\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.624014 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities\") pod \"3b4186e3-63c2-40ae-8303-6f87b8e32247\" (UID: \"3b4186e3-63c2-40ae-8303-6f87b8e32247\") " Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.625193 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities" (OuterVolumeSpecName: 
"utilities") pod "3b4186e3-63c2-40ae-8303-6f87b8e32247" (UID: "3b4186e3-63c2-40ae-8303-6f87b8e32247"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.631582 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd" (OuterVolumeSpecName: "kube-api-access-dfrxd") pod "3b4186e3-63c2-40ae-8303-6f87b8e32247" (UID: "3b4186e3-63c2-40ae-8303-6f87b8e32247"). InnerVolumeSpecName "kube-api-access-dfrxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.653676 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b4186e3-63c2-40ae-8303-6f87b8e32247" (UID: "3b4186e3-63c2-40ae-8303-6f87b8e32247"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.729237 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.729534 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4186e3-63c2-40ae-8303-6f87b8e32247-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.729631 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrxd\" (UniqueName: \"kubernetes.io/projected/3b4186e3-63c2-40ae-8303-6f87b8e32247-kube-api-access-dfrxd\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.932594 4845 generic.go:334] "Generic (PLEG): container finished" podID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerID="646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6" exitCode=0 Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.932932 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerDied","Data":"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6"} Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.932960 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsl9w" event={"ID":"3b4186e3-63c2-40ae-8303-6f87b8e32247","Type":"ContainerDied","Data":"9bbd4f552c2c3759ffc1f28e4eaea61e8145683e0d4c4747293158c6d5f24d0d"} Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.932978 4845 scope.go:117] "RemoveContainer" containerID="646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 
11:49:41.933153 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsl9w" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.968764 4845 scope.go:117] "RemoveContainer" containerID="958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26" Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.981436 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:41 crc kubenswrapper[4845]: I0202 11:49:41.990936 4845 scope.go:117] "RemoveContainer" containerID="ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1" Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:41.996862 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsl9w"] Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:42.053922 4845 scope.go:117] "RemoveContainer" containerID="646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6" Feb 02 11:49:42 crc kubenswrapper[4845]: E0202 11:49:42.056149 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6\": container with ID starting with 646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6 not found: ID does not exist" containerID="646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6" Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:42.056199 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6"} err="failed to get container status \"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6\": rpc error: code = NotFound desc = could not find container \"646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6\": container with ID starting with 
646c431ceeeb23b04c98986f17e9845a9be91c82dc4b1423606fc9bcefe75bb6 not found: ID does not exist" Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:42.056228 4845 scope.go:117] "RemoveContainer" containerID="958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26" Feb 02 11:49:42 crc kubenswrapper[4845]: E0202 11:49:42.059116 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26\": container with ID starting with 958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26 not found: ID does not exist" containerID="958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26" Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:42.059155 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26"} err="failed to get container status \"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26\": rpc error: code = NotFound desc = could not find container \"958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26\": container with ID starting with 958770bf08d0e5aadf525e0aa885f2dff0de3e468a1514af21597dbe72339a26 not found: ID does not exist" Feb 02 11:49:42 crc kubenswrapper[4845]: I0202 11:49:42.059179 4845 scope.go:117] "RemoveContainer" containerID="ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1" Feb 02 11:49:42 crc kubenswrapper[4845]: E0202 11:49:42.059591 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1\": container with ID starting with ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1 not found: ID does not exist" containerID="ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1" Feb 02 11:49:42 crc 
kubenswrapper[4845]: I0202 11:49:42.059615 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1"} err="failed to get container status \"ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1\": rpc error: code = NotFound desc = could not find container \"ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1\": container with ID starting with ad5535d576e128d2308fb411ec8c8d2684ef48e47342e89c0a835295b3e0b0a1 not found: ID does not exist" Feb 02 11:49:43 crc kubenswrapper[4845]: I0202 11:49:43.727089 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" path="/var/lib/kubelet/pods/3b4186e3-63c2-40ae-8303-6f87b8e32247/volumes" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.564527 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:24 crc kubenswrapper[4845]: E0202 11:50:24.565726 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="extract-utilities" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.565745 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="extract-utilities" Feb 02 11:50:24 crc kubenswrapper[4845]: E0202 11:50:24.565766 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="registry-server" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.565774 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="registry-server" Feb 02 11:50:24 crc kubenswrapper[4845]: E0202 11:50:24.565797 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="extract-content" Feb 02 
11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.565805 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="extract-content" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.566182 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4186e3-63c2-40ae-8303-6f87b8e32247" containerName="registry-server" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.568152 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.578828 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.724977 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9vs\" (UniqueName: \"kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.725162 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.725225 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc 
kubenswrapper[4845]: I0202 11:50:24.827762 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9vs\" (UniqueName: \"kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.827854 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.827914 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.828664 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.828910 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.855113 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9vs\" (UniqueName: \"kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs\") pod \"redhat-operators-hsjdb\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:24 crc kubenswrapper[4845]: I0202 11:50:24.928452 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:25 crc kubenswrapper[4845]: I0202 11:50:25.448060 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:26 crc kubenswrapper[4845]: I0202 11:50:26.393753 4845 generic.go:334] "Generic (PLEG): container finished" podID="75a56863-1da1-4828-8e74-01d392b7c313" containerID="403a9328fa4fcfabf4647f93c511ea9eb8de34eced9a1a5d2bb052a8e9e9f78d" exitCode=0 Feb 02 11:50:26 crc kubenswrapper[4845]: I0202 11:50:26.394114 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerDied","Data":"403a9328fa4fcfabf4647f93c511ea9eb8de34eced9a1a5d2bb052a8e9e9f78d"} Feb 02 11:50:26 crc kubenswrapper[4845]: I0202 11:50:26.394407 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerStarted","Data":"2dc8add61b9b40ac8cd1bce94cbf07534556e8498aa890fa3600d1e8cf918dee"} Feb 02 11:50:28 crc kubenswrapper[4845]: I0202 11:50:28.417328 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerStarted","Data":"7f1d76ee71825e87e9057fb9875f9f91dc85c07d5e380783111cfb85e63b2628"} Feb 02 11:50:31 crc kubenswrapper[4845]: I0202 11:50:31.448958 4845 generic.go:334] "Generic 
(PLEG): container finished" podID="75a56863-1da1-4828-8e74-01d392b7c313" containerID="7f1d76ee71825e87e9057fb9875f9f91dc85c07d5e380783111cfb85e63b2628" exitCode=0 Feb 02 11:50:31 crc kubenswrapper[4845]: I0202 11:50:31.449039 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerDied","Data":"7f1d76ee71825e87e9057fb9875f9f91dc85c07d5e380783111cfb85e63b2628"} Feb 02 11:50:32 crc kubenswrapper[4845]: I0202 11:50:32.461675 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerStarted","Data":"1efc4b956319dcbaebabb1dffd7873abe974a811b1f3b11b91cc34dac1d0755b"} Feb 02 11:50:32 crc kubenswrapper[4845]: I0202 11:50:32.484360 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hsjdb" podStartSLOduration=3.033318439 podStartE2EDuration="8.484338569s" podCreationTimestamp="2026-02-02 11:50:24 +0000 UTC" firstStartedPulling="2026-02-02 11:50:26.396768651 +0000 UTC m=+4707.488170101" lastFinishedPulling="2026-02-02 11:50:31.847788791 +0000 UTC m=+4712.939190231" observedRunningTime="2026-02-02 11:50:32.481750784 +0000 UTC m=+4713.573152234" watchObservedRunningTime="2026-02-02 11:50:32.484338569 +0000 UTC m=+4713.575740029" Feb 02 11:50:34 crc kubenswrapper[4845]: I0202 11:50:34.930036 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:34 crc kubenswrapper[4845]: I0202 11:50:34.930528 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:36 crc kubenswrapper[4845]: I0202 11:50:36.797461 4845 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hsjdb" 
podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="registry-server" probeResult="failure" output=< Feb 02 11:50:36 crc kubenswrapper[4845]: timeout: failed to connect service ":50051" within 1s Feb 02 11:50:36 crc kubenswrapper[4845]: > Feb 02 11:50:44 crc kubenswrapper[4845]: I0202 11:50:44.976349 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:45 crc kubenswrapper[4845]: I0202 11:50:45.028091 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:45 crc kubenswrapper[4845]: I0202 11:50:45.214918 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:46 crc kubenswrapper[4845]: I0202 11:50:46.664987 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hsjdb" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="registry-server" containerID="cri-o://1efc4b956319dcbaebabb1dffd7873abe974a811b1f3b11b91cc34dac1d0755b" gracePeriod=2 Feb 02 11:50:47 crc kubenswrapper[4845]: I0202 11:50:47.677072 4845 generic.go:334] "Generic (PLEG): container finished" podID="75a56863-1da1-4828-8e74-01d392b7c313" containerID="1efc4b956319dcbaebabb1dffd7873abe974a811b1f3b11b91cc34dac1d0755b" exitCode=0 Feb 02 11:50:47 crc kubenswrapper[4845]: I0202 11:50:47.677144 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerDied","Data":"1efc4b956319dcbaebabb1dffd7873abe974a811b1f3b11b91cc34dac1d0755b"} Feb 02 11:50:47 crc kubenswrapper[4845]: I0202 11:50:47.967211 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.054333 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q9vs\" (UniqueName: \"kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs\") pod \"75a56863-1da1-4828-8e74-01d392b7c313\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.054531 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content\") pod \"75a56863-1da1-4828-8e74-01d392b7c313\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.059546 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities\") pod \"75a56863-1da1-4828-8e74-01d392b7c313\" (UID: \"75a56863-1da1-4828-8e74-01d392b7c313\") " Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.061334 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs" (OuterVolumeSpecName: "kube-api-access-8q9vs") pod "75a56863-1da1-4828-8e74-01d392b7c313" (UID: "75a56863-1da1-4828-8e74-01d392b7c313"). InnerVolumeSpecName "kube-api-access-8q9vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.068910 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities" (OuterVolumeSpecName: "utilities") pod "75a56863-1da1-4828-8e74-01d392b7c313" (UID: "75a56863-1da1-4828-8e74-01d392b7c313"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.163278 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q9vs\" (UniqueName: \"kubernetes.io/projected/75a56863-1da1-4828-8e74-01d392b7c313-kube-api-access-8q9vs\") on node \"crc\" DevicePath \"\"" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.163556 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.176526 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75a56863-1da1-4828-8e74-01d392b7c313" (UID: "75a56863-1da1-4828-8e74-01d392b7c313"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.266123 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a56863-1da1-4828-8e74-01d392b7c313-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.689177 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsjdb" event={"ID":"75a56863-1da1-4828-8e74-01d392b7c313","Type":"ContainerDied","Data":"2dc8add61b9b40ac8cd1bce94cbf07534556e8498aa890fa3600d1e8cf918dee"} Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.689248 4845 scope.go:117] "RemoveContainer" containerID="1efc4b956319dcbaebabb1dffd7873abe974a811b1f3b11b91cc34dac1d0755b" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.690243 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsjdb" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.711103 4845 scope.go:117] "RemoveContainer" containerID="7f1d76ee71825e87e9057fb9875f9f91dc85c07d5e380783111cfb85e63b2628" Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.733999 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.746450 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hsjdb"] Feb 02 11:50:48 crc kubenswrapper[4845]: I0202 11:50:48.758602 4845 scope.go:117] "RemoveContainer" containerID="403a9328fa4fcfabf4647f93c511ea9eb8de34eced9a1a5d2bb052a8e9e9f78d" Feb 02 11:50:49 crc kubenswrapper[4845]: I0202 11:50:49.727222 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a56863-1da1-4828-8e74-01d392b7c313" path="/var/lib/kubelet/pods/75a56863-1da1-4828-8e74-01d392b7c313/volumes" Feb 02 11:51:16 crc kubenswrapper[4845]: I0202 11:51:16.237878 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:51:16 crc kubenswrapper[4845]: I0202 11:51:16.238517 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.707085 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vx6jc/must-gather-x6jwm"] Feb 02 11:51:34 crc kubenswrapper[4845]: E0202 
11:51:34.708441 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="extract-content" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.708462 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="extract-content" Feb 02 11:51:34 crc kubenswrapper[4845]: E0202 11:51:34.708523 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="extract-utilities" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.708545 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="extract-utilities" Feb 02 11:51:34 crc kubenswrapper[4845]: E0202 11:51:34.708560 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="registry-server" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.708574 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="registry-server" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.708918 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a56863-1da1-4828-8e74-01d392b7c313" containerName="registry-server" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.710605 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.714366 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vx6jc"/"default-dockercfg-29z4n" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.714452 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vx6jc"/"openshift-service-ca.crt" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.714535 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vx6jc"/"kube-root-ca.crt" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.721744 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vx6jc/must-gather-x6jwm"] Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.907936 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqd9j\" (UniqueName: \"kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:34 crc kubenswrapper[4845]: I0202 11:51:34.908641 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.011154 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqd9j\" (UniqueName: \"kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " 
pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.011289 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.011745 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.037580 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqd9j\" (UniqueName: \"kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j\") pod \"must-gather-x6jwm\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.048809 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:51:35 crc kubenswrapper[4845]: I0202 11:51:35.609702 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vx6jc/must-gather-x6jwm"] Feb 02 11:51:36 crc kubenswrapper[4845]: I0202 11:51:36.220610 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" event={"ID":"a0d660a9-36eb-4eee-a756-27847f623aa6","Type":"ContainerStarted","Data":"afdebcf3d7e787794c24615db360659e800f3af334354a0e2dfc836ec9e89774"} Feb 02 11:51:41 crc kubenswrapper[4845]: I0202 11:51:41.286513 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" event={"ID":"a0d660a9-36eb-4eee-a756-27847f623aa6","Type":"ContainerStarted","Data":"7aac41ff8df6e02efefc2ff06fc1a8b635ec6f7893e527494a9992604219ab7f"} Feb 02 11:51:41 crc kubenswrapper[4845]: I0202 11:51:41.287048 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" event={"ID":"a0d660a9-36eb-4eee-a756-27847f623aa6","Type":"ContainerStarted","Data":"70dfdbb3165ccda62cfeb30152b80bca1a6f55a7313abd9ca43a514f2c7bab8e"} Feb 02 11:51:41 crc kubenswrapper[4845]: I0202 11:51:41.311389 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" podStartSLOduration=2.59189704 podStartE2EDuration="7.31137213s" podCreationTimestamp="2026-02-02 11:51:34 +0000 UTC" firstStartedPulling="2026-02-02 11:51:35.608270496 +0000 UTC m=+4776.699671956" lastFinishedPulling="2026-02-02 11:51:40.327745596 +0000 UTC m=+4781.419147046" observedRunningTime="2026-02-02 11:51:41.307396275 +0000 UTC m=+4782.398797715" watchObservedRunningTime="2026-02-02 11:51:41.31137213 +0000 UTC m=+4782.402773600" Feb 02 11:51:46 crc kubenswrapper[4845]: I0202 11:51:46.238009 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:51:46 crc kubenswrapper[4845]: I0202 11:51:46.238528 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:51:46 crc kubenswrapper[4845]: I0202 11:51:46.989720 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-ntmts"] Feb 02 11:51:46 crc kubenswrapper[4845]: I0202 11:51:46.991786 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.059479 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.059763 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxq62\" (UniqueName: \"kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.161760 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxq62\" (UniqueName: 
\"kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.161857 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.162113 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.183580 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxq62\" (UniqueName: \"kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62\") pod \"crc-debug-ntmts\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:47 crc kubenswrapper[4845]: I0202 11:51:47.314085 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:51:48 crc kubenswrapper[4845]: I0202 11:51:48.375073 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" event={"ID":"58d791d0-40ca-48a5-a872-d7fb41e99b11","Type":"ContainerStarted","Data":"346318acde68befdc034a2a7202c4e0ff9451a25850de41e54cbeb1f9e61864a"} Feb 02 11:52:01 crc kubenswrapper[4845]: I0202 11:52:01.557645 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" event={"ID":"58d791d0-40ca-48a5-a872-d7fb41e99b11","Type":"ContainerStarted","Data":"51245b0406ee3556e0c1eaed2454e4cfd8a1b0ad3f05009b2389be64b1574357"} Feb 02 11:52:01 crc kubenswrapper[4845]: I0202 11:52:01.573350 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" podStartSLOduration=2.11817017 podStartE2EDuration="15.573329804s" podCreationTimestamp="2026-02-02 11:51:46 +0000 UTC" firstStartedPulling="2026-02-02 11:51:47.363905088 +0000 UTC m=+4788.455306538" lastFinishedPulling="2026-02-02 11:52:00.819064722 +0000 UTC m=+4801.910466172" observedRunningTime="2026-02-02 11:52:01.570197384 +0000 UTC m=+4802.661598834" watchObservedRunningTime="2026-02-02 11:52:01.573329804 +0000 UTC m=+4802.664731254" Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.237709 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.238369 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.238426 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.239405 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.239460 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5" gracePeriod=600 Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.746984 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5" exitCode=0 Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.747162 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5"} Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.747541 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75"} Feb 02 11:52:16 crc kubenswrapper[4845]: I0202 11:52:16.747565 4845 scope.go:117] "RemoveContainer" containerID="a48dc4e1fa51321313da4366c947dc0243cb3e2da37c626a8f78a36acef3e12e" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.528003 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.531407 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.549551 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.618087 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.618146 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m9l2\" (UniqueName: \"kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.618186 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities\") pod 
\"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.720037 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.720088 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m9l2\" (UniqueName: \"kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.720127 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.720652 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.720666 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content\") pod \"certified-operators-4jj26\" (UID: 
\"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.749118 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m9l2\" (UniqueName: \"kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2\") pod \"certified-operators-4jj26\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:20 crc kubenswrapper[4845]: I0202 11:52:20.906859 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:21 crc kubenswrapper[4845]: I0202 11:52:21.534871 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:22 crc kubenswrapper[4845]: I0202 11:52:22.839281 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerStarted","Data":"80cedd756d836444310b5362b225836cbe7ca0ed2cdf9e25ed8edf3b6702df56"} Feb 02 11:52:23 crc kubenswrapper[4845]: I0202 11:52:23.855369 4845 generic.go:334] "Generic (PLEG): container finished" podID="58d791d0-40ca-48a5-a872-d7fb41e99b11" containerID="51245b0406ee3556e0c1eaed2454e4cfd8a1b0ad3f05009b2389be64b1574357" exitCode=0 Feb 02 11:52:23 crc kubenswrapper[4845]: I0202 11:52:23.855537 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" event={"ID":"58d791d0-40ca-48a5-a872-d7fb41e99b11","Type":"ContainerDied","Data":"51245b0406ee3556e0c1eaed2454e4cfd8a1b0ad3f05009b2389be64b1574357"} Feb 02 11:52:23 crc kubenswrapper[4845]: I0202 11:52:23.862806 4845 generic.go:334] "Generic (PLEG): container finished" podID="7d906e33-6090-4cdd-ab5a-749672c65f48" 
containerID="e7521cf920435e8007c22a332efc7e54769be12a28cad279f83023f1bad3f207" exitCode=0 Feb 02 11:52:23 crc kubenswrapper[4845]: I0202 11:52:23.862852 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerDied","Data":"e7521cf920435e8007c22a332efc7e54769be12a28cad279f83023f1bad3f207"} Feb 02 11:52:24 crc kubenswrapper[4845]: I0202 11:52:24.876259 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerStarted","Data":"c652225f3a80666b9b4aaca287f4bb0c217edfe4eb70b50541e6bca686568ca1"} Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.034627 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.066431 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-ntmts"] Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.080879 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-ntmts"] Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.134535 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host\") pod \"58d791d0-40ca-48a5-a872-d7fb41e99b11\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.134638 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host" (OuterVolumeSpecName: "host") pod "58d791d0-40ca-48a5-a872-d7fb41e99b11" (UID: "58d791d0-40ca-48a5-a872-d7fb41e99b11"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.134699 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxq62\" (UniqueName: \"kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62\") pod \"58d791d0-40ca-48a5-a872-d7fb41e99b11\" (UID: \"58d791d0-40ca-48a5-a872-d7fb41e99b11\") " Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.135711 4845 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58d791d0-40ca-48a5-a872-d7fb41e99b11-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.144842 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62" (OuterVolumeSpecName: "kube-api-access-rxq62") pod "58d791d0-40ca-48a5-a872-d7fb41e99b11" (UID: "58d791d0-40ca-48a5-a872-d7fb41e99b11"). InnerVolumeSpecName "kube-api-access-rxq62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.237904 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxq62\" (UniqueName: \"kubernetes.io/projected/58d791d0-40ca-48a5-a872-d7fb41e99b11-kube-api-access-rxq62\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.727608 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d791d0-40ca-48a5-a872-d7fb41e99b11" path="/var/lib/kubelet/pods/58d791d0-40ca-48a5-a872-d7fb41e99b11/volumes" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.887528 4845 generic.go:334] "Generic (PLEG): container finished" podID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerID="c652225f3a80666b9b4aaca287f4bb0c217edfe4eb70b50541e6bca686568ca1" exitCode=0 Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.887614 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerDied","Data":"c652225f3a80666b9b4aaca287f4bb0c217edfe4eb70b50541e6bca686568ca1"} Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.888777 4845 scope.go:117] "RemoveContainer" containerID="51245b0406ee3556e0c1eaed2454e4cfd8a1b0ad3f05009b2389be64b1574357" Feb 02 11:52:25 crc kubenswrapper[4845]: I0202 11:52:25.888993 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-ntmts" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.359525 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-t28c5"] Feb 02 11:52:26 crc kubenswrapper[4845]: E0202 11:52:26.360206 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d791d0-40ca-48a5-a872-d7fb41e99b11" containerName="container-00" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.360236 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d791d0-40ca-48a5-a872-d7fb41e99b11" containerName="container-00" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.360455 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d791d0-40ca-48a5-a872-d7fb41e99b11" containerName="container-00" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.361256 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.464973 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2nt\" (UniqueName: \"kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.465157 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.567543 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.567668 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.568294 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2nt\" (UniqueName: \"kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.591768 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2nt\" (UniqueName: \"kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt\") pod \"crc-debug-t28c5\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.753244 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.911549 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" event={"ID":"c729ce2c-d59b-4e91-84bb-21d018f9f204","Type":"ContainerStarted","Data":"d3a558afe7937dea4c6a2e2ad0ab37289f3b2cedefe41c3b6e12d13b7bd78120"} Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.914066 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerStarted","Data":"ce8812117c5ad002cc231a9a155cf788764ec6017b333803334ede26175d300a"} Feb 02 11:52:26 crc kubenswrapper[4845]: I0202 11:52:26.974389 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jj26" podStartSLOduration=4.5353691000000005 podStartE2EDuration="6.974361578s" podCreationTimestamp="2026-02-02 11:52:20 +0000 UTC" firstStartedPulling="2026-02-02 11:52:23.865879366 +0000 UTC m=+4824.957280816" lastFinishedPulling="2026-02-02 11:52:26.304871844 +0000 UTC m=+4827.396273294" observedRunningTime="2026-02-02 11:52:26.946976791 +0000 UTC m=+4828.038378261" watchObservedRunningTime="2026-02-02 11:52:26.974361578 +0000 UTC m=+4828.065763048" Feb 02 11:52:27 crc kubenswrapper[4845]: I0202 11:52:27.926400 4845 generic.go:334] "Generic (PLEG): container finished" podID="c729ce2c-d59b-4e91-84bb-21d018f9f204" containerID="9152f3fa80117e002d37c2d1ef281af4c758c28ae16bc74897b372ff63e0c244" exitCode=1 Feb 02 11:52:27 crc kubenswrapper[4845]: I0202 11:52:27.926469 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" event={"ID":"c729ce2c-d59b-4e91-84bb-21d018f9f204","Type":"ContainerDied","Data":"9152f3fa80117e002d37c2d1ef281af4c758c28ae16bc74897b372ff63e0c244"} Feb 02 11:52:27 crc kubenswrapper[4845]: I0202 11:52:27.983375 
4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-t28c5"] Feb 02 11:52:27 crc kubenswrapper[4845]: I0202 11:52:27.997565 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vx6jc/crc-debug-t28c5"] Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.068497 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.229420 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr2nt\" (UniqueName: \"kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt\") pod \"c729ce2c-d59b-4e91-84bb-21d018f9f204\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.229522 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host\") pod \"c729ce2c-d59b-4e91-84bb-21d018f9f204\" (UID: \"c729ce2c-d59b-4e91-84bb-21d018f9f204\") " Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.230082 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host" (OuterVolumeSpecName: "host") pod "c729ce2c-d59b-4e91-84bb-21d018f9f204" (UID: "c729ce2c-d59b-4e91-84bb-21d018f9f204"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.230393 4845 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c729ce2c-d59b-4e91-84bb-21d018f9f204-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.740973 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt" (OuterVolumeSpecName: "kube-api-access-kr2nt") pod "c729ce2c-d59b-4e91-84bb-21d018f9f204" (UID: "c729ce2c-d59b-4e91-84bb-21d018f9f204"). InnerVolumeSpecName "kube-api-access-kr2nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.844828 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr2nt\" (UniqueName: \"kubernetes.io/projected/c729ce2c-d59b-4e91-84bb-21d018f9f204-kube-api-access-kr2nt\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.946388 4845 scope.go:117] "RemoveContainer" containerID="9152f3fa80117e002d37c2d1ef281af4c758c28ae16bc74897b372ff63e0c244" Feb 02 11:52:29 crc kubenswrapper[4845]: I0202 11:52:29.946434 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/crc-debug-t28c5" Feb 02 11:52:30 crc kubenswrapper[4845]: I0202 11:52:30.907214 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:30 crc kubenswrapper[4845]: I0202 11:52:30.907298 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:30 crc kubenswrapper[4845]: I0202 11:52:30.969640 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:31 crc kubenswrapper[4845]: I0202 11:52:31.729442 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c729ce2c-d59b-4e91-84bb-21d018f9f204" path="/var/lib/kubelet/pods/c729ce2c-d59b-4e91-84bb-21d018f9f204/volumes" Feb 02 11:52:34 crc kubenswrapper[4845]: I0202 11:52:34.920638 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:34 crc kubenswrapper[4845]: E0202 11:52:34.921775 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c729ce2c-d59b-4e91-84bb-21d018f9f204" containerName="container-00" Feb 02 11:52:34 crc kubenswrapper[4845]: I0202 11:52:34.921796 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c729ce2c-d59b-4e91-84bb-21d018f9f204" containerName="container-00" Feb 02 11:52:34 crc kubenswrapper[4845]: I0202 11:52:34.922070 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c729ce2c-d59b-4e91-84bb-21d018f9f204" containerName="container-00" Feb 02 11:52:34 crc kubenswrapper[4845]: I0202 11:52:34.923714 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:34 crc kubenswrapper[4845]: I0202 11:52:34.950629 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.076151 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfg77\" (UniqueName: \"kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.076208 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.076325 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.179063 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfg77\" (UniqueName: \"kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.179127 4845 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.179235 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.179755 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.179770 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.200820 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfg77\" (UniqueName: \"kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77\") pod \"community-operators-clx9d\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.265477 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:35 crc kubenswrapper[4845]: I0202 11:52:35.892573 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:35 crc kubenswrapper[4845]: W0202 11:52:35.896383 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359cd7da_7e78_430b_90df_0924f27608c2.slice/crio-15d4c362a1d3553b04d288f58f98c0d8865490035e9beea360d9b69141ab0270 WatchSource:0}: Error finding container 15d4c362a1d3553b04d288f58f98c0d8865490035e9beea360d9b69141ab0270: Status 404 returned error can't find the container with id 15d4c362a1d3553b04d288f58f98c0d8865490035e9beea360d9b69141ab0270 Feb 02 11:52:36 crc kubenswrapper[4845]: I0202 11:52:36.014532 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerStarted","Data":"15d4c362a1d3553b04d288f58f98c0d8865490035e9beea360d9b69141ab0270"} Feb 02 11:52:37 crc kubenswrapper[4845]: I0202 11:52:37.026318 4845 generic.go:334] "Generic (PLEG): container finished" podID="359cd7da-7e78-430b-90df-0924f27608c2" containerID="91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91" exitCode=0 Feb 02 11:52:37 crc kubenswrapper[4845]: I0202 11:52:37.026512 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerDied","Data":"91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91"} Feb 02 11:52:38 crc kubenswrapper[4845]: I0202 11:52:38.038187 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" 
event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerStarted","Data":"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef"} Feb 02 11:52:39 crc kubenswrapper[4845]: I0202 11:52:39.052108 4845 generic.go:334] "Generic (PLEG): container finished" podID="359cd7da-7e78-430b-90df-0924f27608c2" containerID="3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef" exitCode=0 Feb 02 11:52:39 crc kubenswrapper[4845]: I0202 11:52:39.052301 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerDied","Data":"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef"} Feb 02 11:52:40 crc kubenswrapper[4845]: I0202 11:52:40.073101 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerStarted","Data":"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb"} Feb 02 11:52:40 crc kubenswrapper[4845]: I0202 11:52:40.100898 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-clx9d" podStartSLOduration=3.6587866780000002 podStartE2EDuration="6.100871655s" podCreationTimestamp="2026-02-02 11:52:34 +0000 UTC" firstStartedPulling="2026-02-02 11:52:37.029452208 +0000 UTC m=+4838.120853658" lastFinishedPulling="2026-02-02 11:52:39.471537185 +0000 UTC m=+4840.562938635" observedRunningTime="2026-02-02 11:52:40.089843198 +0000 UTC m=+4841.181244648" watchObservedRunningTime="2026-02-02 11:52:40.100871655 +0000 UTC m=+4841.192273125" Feb 02 11:52:41 crc kubenswrapper[4845]: I0202 11:52:41.381120 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:42 crc kubenswrapper[4845]: I0202 11:52:42.296317 4845 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:42 crc kubenswrapper[4845]: I0202 11:52:42.296861 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jj26" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="registry-server" containerID="cri-o://ce8812117c5ad002cc231a9a155cf788764ec6017b333803334ede26175d300a" gracePeriod=2 Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.104683 4845 generic.go:334] "Generic (PLEG): container finished" podID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerID="ce8812117c5ad002cc231a9a155cf788764ec6017b333803334ede26175d300a" exitCode=0 Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.104879 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerDied","Data":"ce8812117c5ad002cc231a9a155cf788764ec6017b333803334ede26175d300a"} Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.578210 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.711427 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content\") pod \"7d906e33-6090-4cdd-ab5a-749672c65f48\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.711645 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities\") pod \"7d906e33-6090-4cdd-ab5a-749672c65f48\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.711752 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m9l2\" (UniqueName: \"kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2\") pod \"7d906e33-6090-4cdd-ab5a-749672c65f48\" (UID: \"7d906e33-6090-4cdd-ab5a-749672c65f48\") " Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.714180 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities" (OuterVolumeSpecName: "utilities") pod "7d906e33-6090-4cdd-ab5a-749672c65f48" (UID: "7d906e33-6090-4cdd-ab5a-749672c65f48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.720635 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2" (OuterVolumeSpecName: "kube-api-access-2m9l2") pod "7d906e33-6090-4cdd-ab5a-749672c65f48" (UID: "7d906e33-6090-4cdd-ab5a-749672c65f48"). InnerVolumeSpecName "kube-api-access-2m9l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.774053 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d906e33-6090-4cdd-ab5a-749672c65f48" (UID: "7d906e33-6090-4cdd-ab5a-749672c65f48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.817521 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.817783 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m9l2\" (UniqueName: \"kubernetes.io/projected/7d906e33-6090-4cdd-ab5a-749672c65f48-kube-api-access-2m9l2\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:43 crc kubenswrapper[4845]: I0202 11:52:43.817798 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d906e33-6090-4cdd-ab5a-749672c65f48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.117869 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jj26" event={"ID":"7d906e33-6090-4cdd-ab5a-749672c65f48","Type":"ContainerDied","Data":"80cedd756d836444310b5362b225836cbe7ca0ed2cdf9e25ed8edf3b6702df56"} Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.117946 4845 scope.go:117] "RemoveContainer" containerID="ce8812117c5ad002cc231a9a155cf788764ec6017b333803334ede26175d300a" Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.118131 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jj26" Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.145983 4845 scope.go:117] "RemoveContainer" containerID="c652225f3a80666b9b4aaca287f4bb0c217edfe4eb70b50541e6bca686568ca1" Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.169861 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.181537 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jj26"] Feb 02 11:52:44 crc kubenswrapper[4845]: I0202 11:52:44.183629 4845 scope.go:117] "RemoveContainer" containerID="e7521cf920435e8007c22a332efc7e54769be12a28cad279f83023f1bad3f207" Feb 02 11:52:45 crc kubenswrapper[4845]: I0202 11:52:45.266536 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:45 crc kubenswrapper[4845]: I0202 11:52:45.266580 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:45 crc kubenswrapper[4845]: I0202 11:52:45.315176 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:45 crc kubenswrapper[4845]: I0202 11:52:45.725552 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" path="/var/lib/kubelet/pods/7d906e33-6090-4cdd-ab5a-749672c65f48/volumes" Feb 02 11:52:46 crc kubenswrapper[4845]: I0202 11:52:46.190245 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:46 crc kubenswrapper[4845]: I0202 11:52:46.701597 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:48 crc 
kubenswrapper[4845]: I0202 11:52:48.217252 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-clx9d" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="registry-server" containerID="cri-o://d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb" gracePeriod=2 Feb 02 11:52:48 crc kubenswrapper[4845]: E0202 11:52:48.340547 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359cd7da_7e78_430b_90df_0924f27608c2.slice/crio-conmon-d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359cd7da_7e78_430b_90df_0924f27608c2.slice/crio-d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:52:48 crc kubenswrapper[4845]: I0202 11:52:48.922595 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.073498 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfg77\" (UniqueName: \"kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77\") pod \"359cd7da-7e78-430b-90df-0924f27608c2\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.073644 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content\") pod \"359cd7da-7e78-430b-90df-0924f27608c2\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.073708 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities\") pod \"359cd7da-7e78-430b-90df-0924f27608c2\" (UID: \"359cd7da-7e78-430b-90df-0924f27608c2\") " Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.075455 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities" (OuterVolumeSpecName: "utilities") pod "359cd7da-7e78-430b-90df-0924f27608c2" (UID: "359cd7da-7e78-430b-90df-0924f27608c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.081205 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77" (OuterVolumeSpecName: "kube-api-access-gfg77") pod "359cd7da-7e78-430b-90df-0924f27608c2" (UID: "359cd7da-7e78-430b-90df-0924f27608c2"). InnerVolumeSpecName "kube-api-access-gfg77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.161784 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "359cd7da-7e78-430b-90df-0924f27608c2" (UID: "359cd7da-7e78-430b-90df-0924f27608c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.177115 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfg77\" (UniqueName: \"kubernetes.io/projected/359cd7da-7e78-430b-90df-0924f27608c2-kube-api-access-gfg77\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.177166 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.177186 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cd7da-7e78-430b-90df-0924f27608c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.239282 4845 generic.go:334] "Generic (PLEG): container finished" podID="359cd7da-7e78-430b-90df-0924f27608c2" containerID="d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb" exitCode=0 Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.239335 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerDied","Data":"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb"} Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.239363 4845 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-clx9d" event={"ID":"359cd7da-7e78-430b-90df-0924f27608c2","Type":"ContainerDied","Data":"15d4c362a1d3553b04d288f58f98c0d8865490035e9beea360d9b69141ab0270"} Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.239380 4845 scope.go:117] "RemoveContainer" containerID="d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.239558 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-clx9d" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.276007 4845 scope.go:117] "RemoveContainer" containerID="3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.285289 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.298107 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-clx9d"] Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.299182 4845 scope.go:117] "RemoveContainer" containerID="91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.352064 4845 scope.go:117] "RemoveContainer" containerID="d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb" Feb 02 11:52:49 crc kubenswrapper[4845]: E0202 11:52:49.352514 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb\": container with ID starting with d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb not found: ID does not exist" containerID="d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 
11:52:49.352555 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb"} err="failed to get container status \"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb\": rpc error: code = NotFound desc = could not find container \"d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb\": container with ID starting with d7f120c8c90e9d47f3e57ed9e4798938e3d985f7e4c6b455db95590c9b0069fb not found: ID does not exist" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.352584 4845 scope.go:117] "RemoveContainer" containerID="3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef" Feb 02 11:52:49 crc kubenswrapper[4845]: E0202 11:52:49.353024 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef\": container with ID starting with 3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef not found: ID does not exist" containerID="3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.353052 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef"} err="failed to get container status \"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef\": rpc error: code = NotFound desc = could not find container \"3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef\": container with ID starting with 3dddecb4998bcdcbc6205bad5e6fa6f2c13fb363dfb2c7bf32befaa7521e96ef not found: ID does not exist" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.353068 4845 scope.go:117] "RemoveContainer" containerID="91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91" Feb 02 11:52:49 crc 
kubenswrapper[4845]: E0202 11:52:49.353343 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91\": container with ID starting with 91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91 not found: ID does not exist" containerID="91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.353370 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91"} err="failed to get container status \"91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91\": rpc error: code = NotFound desc = could not find container \"91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91\": container with ID starting with 91cfa1bca0e33f7dbb42899c32bcadfac501f3ebe292d4c890d2cd403f521e91 not found: ID does not exist" Feb 02 11:52:49 crc kubenswrapper[4845]: I0202 11:52:49.726461 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359cd7da-7e78-430b-90df-0924f27608c2" path="/var/lib/kubelet/pods/359cd7da-7e78-430b-90df-0924f27608c2/volumes" Feb 02 11:53:35 crc kubenswrapper[4845]: I0202 11:53:35.634279 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8448c87f86-gdg49_4d926fea-dae3-4818-a608-4d9fa52abef5/barbican-api/0.log" Feb 02 11:53:35 crc kubenswrapper[4845]: I0202 11:53:35.967866 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8448c87f86-gdg49_4d926fea-dae3-4818-a608-4d9fa52abef5/barbican-api-log/0.log" Feb 02 11:53:35 crc kubenswrapper[4845]: I0202 11:53:35.967958 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-555888887b-mbz72_6dea749b-261a-4af3-979a-127dca4af07c/barbican-keystone-listener-log/0.log" Feb 
02 11:53:35 crc kubenswrapper[4845]: I0202 11:53:35.978854 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-555888887b-mbz72_6dea749b-261a-4af3-979a-127dca4af07c/barbican-keystone-listener/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.198804 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-954bfc4f9-dfghw_a9ec709e-f840-4ba0-b631-77038f9c5551/barbican-worker/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.213963 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-954bfc4f9-dfghw_a9ec709e-f840-4ba0-b631-77038f9c5551/barbican-worker-log/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.424853 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_813ec32b-5cd3-491d-85ac-bcf0140d0a8f/ceilometer-central-agent/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.455178 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_813ec32b-5cd3-491d-85ac-bcf0140d0a8f/proxy-httpd/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.490859 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_813ec32b-5cd3-491d-85ac-bcf0140d0a8f/sg-core/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.495370 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_813ec32b-5cd3-491d-85ac-bcf0140d0a8f/ceilometer-notification-agent/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.757499 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1800fe94-c9b9-4a5a-963a-75d82a4eab94/cinder-api/0.log" Feb 02 11:53:36 crc kubenswrapper[4845]: I0202 11:53:36.794078 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1800fe94-c9b9-4a5a-963a-75d82a4eab94/cinder-api-log/0.log" Feb 02 11:53:37 crc 
kubenswrapper[4845]: I0202 11:53:37.000575 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3483568c-cbaa-4f63-94e5-36d1a9534d31/cinder-scheduler/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.081306 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3483568c-cbaa-4f63-94e5-36d1a9534d31/probe/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.109961 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-785dh_330a4322-2c1c-4f9a-9093-bfae422cc1fb/init/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.322348 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-785dh_330a4322-2c1c-4f9a-9093-bfae422cc1fb/dnsmasq-dns/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.475417 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-785dh_330a4322-2c1c-4f9a-9093-bfae422cc1fb/init/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.630680 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_80eee60d-7cee-4b29-b022-9f5e8e5d6bdb/glance-httpd/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.788970 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_80eee60d-7cee-4b29-b022-9f5e8e5d6bdb/glance-log/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.868682 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fbdeff72-81f9-4063-8704-d97b21e01b82/glance-httpd/0.log" Feb 02 11:53:37 crc kubenswrapper[4845]: I0202 11:53:37.924575 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fbdeff72-81f9-4063-8704-d97b21e01b82/glance-log/0.log" Feb 02 11:53:38 crc kubenswrapper[4845]: I0202 
11:53:38.606251 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-658dbb4bcd-qn5fs_6cfd78fb-8f69-43d4-9a58-f7e2f5d27958/heat-engine/0.log" Feb 02 11:53:38 crc kubenswrapper[4845]: I0202 11:53:38.663856 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7998b4fc87-n5g2f_7dfab927-78ef-4105-a07b-a109690fda89/heat-api/0.log" Feb 02 11:53:38 crc kubenswrapper[4845]: I0202 11:53:38.704163 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6dcccd9c6c-tq64l_1225250d-8a00-47d3-acea-856fa864dff5/heat-cfnapi/0.log" Feb 02 11:53:38 crc kubenswrapper[4845]: I0202 11:53:38.916295 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500501-znk7q_7303667b-89bb-4ad1-92a8-3c94525911d4/keystone-cron/0.log" Feb 02 11:53:38 crc kubenswrapper[4845]: I0202 11:53:38.963739 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c4f9db54b-5v9r8_61e42051-311d-4b4b-af17-e301351d9267/keystone-api/0.log" Feb 02 11:53:39 crc kubenswrapper[4845]: I0202 11:53:39.174384 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f70412f5-a824-45b2-92c2-8e37a25d540a/kube-state-metrics/0.log" Feb 02 11:53:39 crc kubenswrapper[4845]: I0202 11:53:39.484299 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6704fdd3-f589-4ccd-9a52-4a914e219b09/mysqld-exporter/0.log" Feb 02 11:53:39 crc kubenswrapper[4845]: I0202 11:53:39.713032 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b8db7b6ff-lx6zl_7b4befc3-7f3f-4813-9c5e-9fac28d60f72/neutron-api/0.log" Feb 02 11:53:39 crc kubenswrapper[4845]: I0202 11:53:39.818095 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b8db7b6ff-lx6zl_7b4befc3-7f3f-4813-9c5e-9fac28d60f72/neutron-httpd/0.log" Feb 02 11:53:40 crc kubenswrapper[4845]: I0202 11:53:40.381502 4845 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_953beda6-58f2-45c2-b34e-0cb7db2d3bf6/nova-api-log/0.log" Feb 02 11:53:40 crc kubenswrapper[4845]: I0202 11:53:40.733435 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_af5e3b4b-9a44-4b50-8799-71f869de9028/nova-cell0-conductor-conductor/0.log" Feb 02 11:53:40 crc kubenswrapper[4845]: I0202 11:53:40.740045 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_953beda6-58f2-45c2-b34e-0cb7db2d3bf6/nova-api-api/0.log" Feb 02 11:53:40 crc kubenswrapper[4845]: I0202 11:53:40.840914 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_039d1d72-0f72-4172-a037-ea289c8d7fbb/nova-cell1-conductor-conductor/0.log" Feb 02 11:53:41 crc kubenswrapper[4845]: I0202 11:53:41.311552 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_85bf6fdc-0816-4f80-966c-426f4906c581/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.037057 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_12adbd4d-efe1-4549-bcac-f2b5f14f18b9/nova-metadata-log/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.074952 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b3eed39b-ccd7-4c3d-bbd8-6872503e1c60/nova-scheduler-scheduler/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.274509 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25ccf740-cc48-4863-8a7d-98548588860f/mysql-bootstrap/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.580018 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25ccf740-cc48-4863-8a7d-98548588860f/mysql-bootstrap/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.613974 4845 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25ccf740-cc48-4863-8a7d-98548588860f/galera/0.log" Feb 02 11:53:42 crc kubenswrapper[4845]: I0202 11:53:42.861098 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c7d4707-dfce-464f-bffe-0d543bea6299/mysql-bootstrap/0.log" Feb 02 11:53:43 crc kubenswrapper[4845]: I0202 11:53:43.042544 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c7d4707-dfce-464f-bffe-0d543bea6299/mysql-bootstrap/0.log" Feb 02 11:53:43 crc kubenswrapper[4845]: I0202 11:53:43.096784 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c7d4707-dfce-464f-bffe-0d543bea6299/galera/0.log" Feb 02 11:53:43 crc kubenswrapper[4845]: I0202 11:53:43.725163 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_12adbd4d-efe1-4549-bcac-f2b5f14f18b9/nova-metadata-metadata/0.log" Feb 02 11:53:43 crc kubenswrapper[4845]: I0202 11:53:43.861657 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c10a41f9-4bda-4d90-81c1-09ed21f00b2b/openstackclient/0.log" Feb 02 11:53:43 crc kubenswrapper[4845]: I0202 11:53:43.943715 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mqgrd_0f152c8c-6cc4-4586-9fcb-c1ddee6e81d2/openstack-network-exporter/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.165166 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qwr2_4f430e6a-b6ca-42b5-bb37-e5104bba0bd1/ovsdb-server-init/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.416183 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qwr2_4f430e6a-b6ca-42b5-bb37-e5104bba0bd1/ovsdb-server-init/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.417863 4845 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qwr2_4f430e6a-b6ca-42b5-bb37-e5104bba0bd1/ovs-vswitchd/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.457098 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qwr2_4f430e6a-b6ca-42b5-bb37-e5104bba0bd1/ovsdb-server/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.633051 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-tt4db_72da7703-b176-47cb-953e-de037d663c55/ovn-controller/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.745821 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_53989098-3602-4958-96b3-ca7c539c29c9/openstack-network-exporter/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.752501 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_53989098-3602-4958-96b3-ca7c539c29c9/ovn-northd/0.log" Feb 02 11:53:44 crc kubenswrapper[4845]: I0202 11:53:44.950530 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bd4a7449-0e37-44e1-9f01-bb1a336cb8cd/openstack-network-exporter/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.243076 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bd4a7449-0e37-44e1-9f01-bb1a336cb8cd/ovsdbserver-nb/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.431828 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_67a51964-326b-42cd-8055-0822d42557f7/openstack-network-exporter/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.455069 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_67a51964-326b-42cd-8055-0822d42557f7/ovsdbserver-sb/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.649892 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-68f64c64d8-r7nkx_5978920a-e63d-4cb3-accd-4353fb398d50/placement-api/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.880599 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68f64c64d8-r7nkx_5978920a-e63d-4cb3-accd-4353fb398d50/placement-log/0.log" Feb 02 11:53:45 crc kubenswrapper[4845]: I0202 11:53:45.937206 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_31859db3-3de0-46d0-a81b-b951f1d45279/init-config-reloader/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.062495 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_31859db3-3de0-46d0-a81b-b951f1d45279/config-reloader/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.151504 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_31859db3-3de0-46d0-a81b-b951f1d45279/init-config-reloader/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.164716 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_31859db3-3de0-46d0-a81b-b951f1d45279/thanos-sidecar/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.191956 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_31859db3-3de0-46d0-a81b-b951f1d45279/prometheus/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.415754 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_70739f91-4fde-4bc2-b4e1-5bdb7cb0426c/setup-container/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.642026 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_70739f91-4fde-4bc2-b4e1-5bdb7cb0426c/rabbitmq/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.708921 4845 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e45ad6a-20f4-4da2-82b7-500ed29a0cd5/setup-container/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.753031 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_70739f91-4fde-4bc2-b4e1-5bdb7cb0426c/setup-container/0.log" Feb 02 11:53:46 crc kubenswrapper[4845]: I0202 11:53:46.987742 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e45ad6a-20f4-4da2-82b7-500ed29a0cd5/setup-container/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.070589 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e45ad6a-20f4-4da2-82b7-500ed29a0cd5/rabbitmq/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.105491 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_d0a3a285-364a-4df2-8a7c-947ff673f254/setup-container/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.399543 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_d0a3a285-364a-4df2-8a7c-947ff673f254/setup-container/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.443888 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_d0a3a285-364a-4df2-8a7c-947ff673f254/rabbitmq/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.492320 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_a61fa08e-868a-4415-88d5-7ed0eebbeb45/setup-container/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.801610 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_a61fa08e-868a-4415-88d5-7ed0eebbeb45/setup-container/0.log" Feb 02 11:53:47 crc kubenswrapper[4845]: I0202 11:53:47.852006 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_a61fa08e-868a-4415-88d5-7ed0eebbeb45/rabbitmq/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.108165 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-fwkp8_acbaf357-af6c-46b6-b6f0-de2b6e4ee44c/swift-ring-rebalance/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.110669 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67878d9fbc-npvwk_1d9f4b80-6273-4d77-9309-2ffecc5acc64/proxy-server/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.123003 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67878d9fbc-npvwk_1d9f4b80-6273-4d77-9309-2ffecc5acc64/proxy-httpd/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.403769 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/account-auditor/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.452160 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/account-reaper/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.480956 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/account-replicator/0.log" Feb 02 11:53:48 crc kubenswrapper[4845]: I0202 11:53:48.910516 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/account-server/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.031771 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/container-auditor/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.069693 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/container-server/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.111136 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/container-replicator/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.271525 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/container-updater/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.305086 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/object-expirer/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.533606 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/object-auditor/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.649545 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/object-replicator/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.706687 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/object-updater/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.731104 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/object-server/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.781960 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/rsync/0.log" Feb 02 11:53:49 crc kubenswrapper[4845]: I0202 11:53:49.983097 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d6db6e42-984a-484b-9f90-e6efa9817f37/swift-recon-cron/0.log" Feb 02 11:53:53 crc kubenswrapper[4845]: I0202 11:53:53.432617 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b34640b6-49ff-4638-bde8-1bc32e658907/memcached/0.log" Feb 02 11:54:16 crc kubenswrapper[4845]: I0202 11:54:16.237642 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:54:16 crc kubenswrapper[4845]: I0202 11:54:16.238283 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.264355 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/util/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.573391 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/util/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.604179 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/pull/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.611039 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/pull/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.809122 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/extract/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.815202 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/pull/0.log" Feb 02 11:54:20 crc kubenswrapper[4845]: I0202 11:54:20.825210 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a302fc681e2ed83f5d2c11a7c6c4aafa0a15617c72a70292b42fcaafcfz69tf_08c56222-38b1-47b8-b554-cc59e503ecf0/util/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.055798 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-cfvq7_202de28c-c44a-43d9-98fd-4b34b1dcc65f/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.071286 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-c4jdf_d9196fe1-4a04-44c1-9a5f-1ad5de52da7f/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.274293 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-qfprx_efa2be30-a7d0-4b26-865a-58448de203a0/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.345714 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-9c2wv_85439e8a-f7d3-4e0b-827c-bf27e8cd53dd/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 
11:54:21.572546 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-pdfcx_745626d8-548b-43bb-aee8-eeab34a86427/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.580512 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-msjcj_1b72ed0e-9df5-459f-8ca9-de19874a3018/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.971325 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-bc2xw_36101b5e-a4ec-42b8-bb19-1cd2df2897c6/manager/0.log" Feb 02 11:54:21 crc kubenswrapper[4845]: I0202 11:54:21.971687 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-m55cc_f3c02aa0-5039-4a4f-ae11-1bac119f7e31/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.152743 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-w55l4_7cc6d028-e9d2-459c-b34c-d069917832a4/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.273270 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-plj9z_de6ca8d3-5e0c-4b7f-b7b5-ce0d9fa4856d/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.412643 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-t89t9_568bf546-0674-4dbd-91d8-9497c682e368/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.480530 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-hnt9f_a1c4a4d1-3974-47c1-9efc-ee88a38e13a5/manager/0.log" Feb 02 11:54:22 crc 
kubenswrapper[4845]: I0202 11:54:22.704938 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-p98cd_cac06f19-af65-481d-b739-68375e8d2968/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.790632 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-s97rq_30843195-75a4-4b59-9193-dacda845ace7/manager/0.log" Feb 02 11:54:22 crc kubenswrapper[4845]: I0202 11:54:22.951328 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dx44lh_146aa38c-b63c-485a-9c55-006031cfcaa0/manager/0.log" Feb 02 11:54:23 crc kubenswrapper[4845]: I0202 11:54:23.114131 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5649bd689f-k5lt8_e693a9f1-6990-407e-9d01-a23428a6f602/operator/0.log" Feb 02 11:54:23 crc kubenswrapper[4845]: I0202 11:54:23.635244 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-csz6h_153335e1-79de-4c5c-a3cd-2731d0998994/registry-server/0.log" Feb 02 11:54:23 crc kubenswrapper[4845]: I0202 11:54:23.827263 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-lt2wj_eae9c104-9193-4404-b25a-3a47932ef374/manager/0.log" Feb 02 11:54:23 crc kubenswrapper[4845]: I0202 11:54:23.967883 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9ltr5_70403789-9865-4c4d-a969-118a157e564e/manager/0.log" Feb 02 11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.188821 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z5f9l_39f98254-3b87-4ac2-be8c-7d7a0f29d6ce/operator/0.log" Feb 02 
11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.281277 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b7c7bb6c9-k6h5z_bd7f3a0c-1bdf-4673-b657-f56e7040f2a1/manager/0.log" Feb 02 11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.416783 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-w7bxj_817413ef-6c47-47ec-8e08-8dffd27c1e11/manager/0.log" Feb 02 11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.642224 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-5f4q4_09ccace8-b972-48ae-a15d-ecf88a300105/manager/0.log" Feb 02 11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.773545 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bbb97ddc6-fx4tn_1cec8fc8-2b7b-4332-92a4-05483486f925/manager/0.log" Feb 02 11:54:24 crc kubenswrapper[4845]: I0202 11:54:24.908448 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-mkrpp_5d4eb1a9-137a-4959-9d37-d81ee9c6dd54/manager/0.log" Feb 02 11:54:46 crc kubenswrapper[4845]: I0202 11:54:46.237654 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:54:46 crc kubenswrapper[4845]: I0202 11:54:46.238322 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 02 11:54:49 crc kubenswrapper[4845]: I0202 11:54:49.709355 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c46tw_722bda9f-5a8b-4c83-8b1f-790da0003ce9/control-plane-machine-set-operator/0.log" Feb 02 11:54:50 crc kubenswrapper[4845]: I0202 11:54:50.187297 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xk8gn_6bf70521-8fdf-400f-b7cd-d96b609b4783/kube-rbac-proxy/0.log" Feb 02 11:54:50 crc kubenswrapper[4845]: I0202 11:54:50.228370 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xk8gn_6bf70521-8fdf-400f-b7cd-d96b609b4783/machine-api-operator/0.log" Feb 02 11:55:03 crc kubenswrapper[4845]: I0202 11:55:03.940788 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7p596_c1996e72-3bd0-4770-9662-c0c1359d7a8b/cert-manager-controller/0.log" Feb 02 11:55:04 crc kubenswrapper[4845]: I0202 11:55:04.249120 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vqsd8_8b99109d-f1ff-4d24-b08a-c317fffd456c/cert-manager-cainjector/0.log" Feb 02 11:55:04 crc kubenswrapper[4845]: I0202 11:55:04.286036 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ltwq9_7b6c985e-704e-4ff8-b668-d2f4cb218172/cert-manager-webhook/0.log" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.238044 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.238596 4845 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.238640 4845 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.239608 4845 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75"} pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.239664 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" containerID="cri-o://3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" gracePeriod=600 Feb 02 11:55:16 crc kubenswrapper[4845]: E0202 11:55:16.328608 4845 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf2f253_531f_4835_84c1_928680352f7f.slice/crio-conmon-3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:55:16 crc kubenswrapper[4845]: E0202 11:55:16.378692 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.819867 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2phr4_f49a4fe2-aa60-4d14-a9bb-f13d0066a542/nmstate-console-plugin/0.log" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.845197 4845 generic.go:334] "Generic (PLEG): container finished" podID="ebf2f253-531f-4835-84c1-928680352f7f" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" exitCode=0 Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.845241 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerDied","Data":"3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75"} Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.845285 4845 scope.go:117] "RemoveContainer" containerID="4667d56dc23295df8831dac2a0e953a5f6c9c92d45b3e4effcfbe305f29defc5" Feb 02 11:55:16 crc kubenswrapper[4845]: I0202 11:55:16.846115 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:55:16 crc kubenswrapper[4845]: E0202 11:55:16.846448 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:55:17 crc kubenswrapper[4845]: I0202 11:55:17.109986 4845 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ks7bq_8c3ff69a-c422-491b-a933-0522f29d7e7c/nmstate-handler/0.log" Feb 02 11:55:17 crc kubenswrapper[4845]: I0202 11:55:17.172595 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ksh5c_65b8d7a7-4de6-4edc-b652-999572c3494a/kube-rbac-proxy/0.log" Feb 02 11:55:17 crc kubenswrapper[4845]: I0202 11:55:17.275902 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ksh5c_65b8d7a7-4de6-4edc-b652-999572c3494a/nmstate-metrics/0.log" Feb 02 11:55:17 crc kubenswrapper[4845]: I0202 11:55:17.359860 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xpndf_17b0c917-994c-41bc-9fbf-6e9d86d65bca/nmstate-operator/0.log" Feb 02 11:55:17 crc kubenswrapper[4845]: I0202 11:55:17.472303 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-k2dv5_ed30c5ac-3449-4902-b948-34958198b224/nmstate-webhook/0.log" Feb 02 11:55:29 crc kubenswrapper[4845]: I0202 11:55:29.721718 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:55:29 crc kubenswrapper[4845]: E0202 11:55:29.722584 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:55:30 crc kubenswrapper[4845]: I0202 11:55:30.418332 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b659b8cd7-mwl8b_2988a5fa-2703-4a60-bcd6-dc81ceea7e1a/manager/0.log" Feb 02 11:55:30 crc kubenswrapper[4845]: I0202 11:55:30.441758 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b659b8cd7-mwl8b_2988a5fa-2703-4a60-bcd6-dc81ceea7e1a/kube-rbac-proxy/0.log" Feb 02 11:55:43 crc kubenswrapper[4845]: I0202 11:55:43.268763 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rmj27_308cfce2-8d47-45e6-9153-a8cd92a8758b/prometheus-operator/0.log" Feb 02 11:55:43 crc kubenswrapper[4845]: I0202 11:55:43.516164 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_d289096b-a35d-4a41-90a3-cab735629cc7/prometheus-operator-admission-webhook/0.log" Feb 02 11:55:43 crc kubenswrapper[4845]: I0202 11:55:43.543713 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413/prometheus-operator-admission-webhook/0.log" Feb 02 11:55:43 crc kubenswrapper[4845]: I0202 11:55:43.900990 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5wvdz_b75686c5-933f-4f8d-bf87-0229795baf12/operator/0.log" Feb 02 11:55:43 crc kubenswrapper[4845]: I0202 11:55:43.929783 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8x2hl_0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b/observability-ui-dashboards/0.log" Feb 02 11:55:44 crc kubenswrapper[4845]: I0202 11:55:44.059254 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8hhqb_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec/perses-operator/0.log" Feb 02 11:55:44 
crc kubenswrapper[4845]: I0202 11:55:44.717425 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:55:44 crc kubenswrapper[4845]: E0202 11:55:44.718204 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:55:57 crc kubenswrapper[4845]: I0202 11:55:57.713364 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:55:57 crc kubenswrapper[4845]: E0202 11:55:57.714048 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:56:00 crc kubenswrapper[4845]: I0202 11:56:00.820584 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-4pzr6_cb944758-f09b-4486-9f3b-4ef87b53246b/cluster-logging-operator/0.log" Feb 02 11:56:00 crc kubenswrapper[4845]: I0202 11:56:00.845372 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-bkwj8_54453df2-b815-42be-9542-aef7eed68aeb/collector/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.025158 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-compactor-0_1f889290-f739-444c-a278-254f68d9d886/loki-compactor/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.106658 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-847z7_4af06166-f541-44e7-8b4b-37e4f39a8729/loki-distributor/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.249685 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-cf45dcc8c-vr5gw_2b18d0a9-d2cc-4d0b-9ede-a78da13ac929/opa/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.256756 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-cf45dcc8c-vr5gw_2b18d0a9-d2cc-4d0b-9ede-a78da13ac929/gateway/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.446840 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-cf45dcc8c-wn9nt_1a4ec7d2-3bae-4f70-9a46-e90b067a0518/opa/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.457003 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-cf45dcc8c-wn9nt_1a4ec7d2-3bae-4f70-9a46-e90b067a0518/gateway/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.543362 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_10b5b71f-47de-4ca2-9133-254552173c73/loki-index-gateway/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.730869 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_2d889e99-8118-4f52-ab20-b69a55bec079/loki-ingester/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.743295 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-sbp94_796c275c-0c9b-4b2e-ba0f-7fbeb645028a/loki-querier/0.log" Feb 02 11:56:01 crc kubenswrapper[4845]: I0202 11:56:01.920457 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-dz4l8_27a684fe-6402-4a0d-ab7c-e5c4eab14a64/loki-query-frontend/0.log" Feb 02 11:56:12 crc kubenswrapper[4845]: I0202 11:56:12.712822 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:56:12 crc kubenswrapper[4845]: E0202 11:56:12.713907 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.107773 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pwcrt_64760ce4-85d6-4e58-aa77-99c1ca4d936e/kube-rbac-proxy/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.293342 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pwcrt_64760ce4-85d6-4e58-aa77-99c1ca4d936e/controller/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.403411 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-frr-files/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.575671 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-frr-files/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 
11:56:17.597147 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-reloader/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.652436 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-metrics/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.677374 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-reloader/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.814311 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-frr-files/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.840268 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-reloader/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.862449 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-metrics/0.log" Feb 02 11:56:17 crc kubenswrapper[4845]: I0202 11:56:17.888927 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-metrics/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.085687 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-metrics/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.085755 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-reloader/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.104067 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/cp-frr-files/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.124632 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/controller/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.257999 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/frr-metrics/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.288640 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/kube-rbac-proxy/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.347501 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/kube-rbac-proxy-frr/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.580844 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/reloader/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.602811 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-hd78b_8e70dfea-db96-43f0-82ea-e9342326f82f/frr-k8s-webhook-server/0.log" Feb 02 11:56:18 crc kubenswrapper[4845]: I0202 11:56:18.895019 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-bcff8566-gkqml_71926ac8-4fc3-41de-8295-01c8ddbb9d27/manager/0.log" Feb 02 11:56:19 crc kubenswrapper[4845]: I0202 11:56:19.112456 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66c6bb874c-q55bn_d2f82fb6-ff9c-4578-8e8c-2bc454b09927/webhook-server/0.log" Feb 02 11:56:19 crc kubenswrapper[4845]: I0202 11:56:19.129870 4845 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7gchc_ae8d6393-e53b-4acc-9a90-094d95e29c03/kube-rbac-proxy/0.log" Feb 02 11:56:19 crc kubenswrapper[4845]: I0202 11:56:19.693535 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bnlrj_572a79b7-a042-4090-afdd-924cdb0f9d3e/frr/0.log" Feb 02 11:56:19 crc kubenswrapper[4845]: I0202 11:56:19.944222 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7gchc_ae8d6393-e53b-4acc-9a90-094d95e29c03/speaker/0.log" Feb 02 11:56:25 crc kubenswrapper[4845]: I0202 11:56:25.713758 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:56:25 crc kubenswrapper[4845]: E0202 11:56:25.715801 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.105852 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/util/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.309768 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/util/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.318927 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/pull/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.356666 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/pull/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.533005 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/util/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.552149 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/extract/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.575499 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2v66lz_2e68fd72-b961-4a58-9f54-01bd2f6ebd76/pull/0.log" Feb 02 11:56:34 crc kubenswrapper[4845]: I0202 11:56:34.760020 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/util/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.000533 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.040533 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/util/0.log" Feb 02 
11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.064369 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.217423 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/extract/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.217541 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.252879 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc58ffr_2d07b157-761c-4649-ace7-6b9e73636713/util/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.401714 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/util/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.595975 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/util/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.675067 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.677345 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.895262 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/extract/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.908732 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/pull/0.log" Feb 02 11:56:35 crc kubenswrapper[4845]: I0202 11:56:35.922571 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bc9bxt_8ead5170-aa2d-4a22-a528-02edf1375239/util/0.log" Feb 02 11:56:36 crc kubenswrapper[4845]: I0202 11:56:36.092540 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/util/0.log" Feb 02 11:56:36 crc kubenswrapper[4845]: I0202 11:56:36.226577 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/util/0.log" Feb 02 11:56:36 crc kubenswrapper[4845]: I0202 11:56:36.254878 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/pull/0.log" Feb 02 11:56:36 crc kubenswrapper[4845]: I0202 11:56:36.291637 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/pull/0.log" Feb 02 
11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.027472 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/util/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.028738 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/pull/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.112285 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713md5sq_cbe2dd1b-0b96-4fb7-8873-f9c1378bde92/extract/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.244036 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/util/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.444665 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/pull/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.456510 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/pull/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.466164 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/util/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.723541 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/util/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.746717 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/pull/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.768172 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p55sw_fe3cf6fe-df9c-4484-a6af-75fe0b5fa907/extract/0.log" Feb 02 11:56:37 crc kubenswrapper[4845]: I0202 11:56:37.925729 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-utilities/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.097638 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-content/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.097736 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-utilities/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.105560 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-content/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.329655 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-utilities/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.330283 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/extract-content/0.log" Feb 02 11:56:38 crc kubenswrapper[4845]: I0202 11:56:38.479677 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-utilities/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.211763 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-skmdg_7b143223-c383-4b6f-b221-c8908e9f93d9/registry-server/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.234346 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-utilities/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.246998 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-content/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.282355 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-content/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.543881 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-content/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.568857 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/extract-utilities/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.699681 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ms22s_9fc452cb-0731-44f6-aae8-bad730786d8a/marketplace-operator/0.log" Feb 02 11:56:39 crc kubenswrapper[4845]: I0202 11:56:39.852532 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-utilities/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.052640 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-utilities/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.087806 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-content/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.118771 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-content/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.364762 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-utilities/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.390329 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k66k5_26334878-6884-4481-b360-96927a5dd3d6/registry-server/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.442459 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/extract-content/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.596035 4845 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-5vfjf_ac22736d-1901-40bf-a17f-186de03c64bf/registry-server/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.622516 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-utilities/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.713460 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:56:40 crc kubenswrapper[4845]: E0202 11:56:40.713779 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.812216 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-content/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.822109 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-utilities/0.log" Feb 02 11:56:40 crc kubenswrapper[4845]: I0202 11:56:40.834876 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-content/0.log" Feb 02 11:56:41 crc kubenswrapper[4845]: I0202 11:56:41.017586 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-utilities/0.log" Feb 02 11:56:41 crc 
kubenswrapper[4845]: I0202 11:56:41.035405 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/extract-content/0.log" Feb 02 11:56:41 crc kubenswrapper[4845]: I0202 11:56:41.707869 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-575p8_5c039981-931c-408f-8185-4d22b3da04a3/registry-server/0.log" Feb 02 11:56:53 crc kubenswrapper[4845]: I0202 11:56:53.713228 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:56:53 crc kubenswrapper[4845]: E0202 11:56:53.714179 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 11:56:54.155624 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858f6bffb9-kx466_631a8bcf-3ed5-4bd7-8fcd-2f4a56f47413/prometheus-operator-admission-webhook/0.log" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 11:56:54.171354 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rmj27_308cfce2-8d47-45e6-9153-a8cd92a8758b/prometheus-operator/0.log" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 11:56:54.199025 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858f6bffb9-fpgch_d289096b-a35d-4a41-90a3-cab735629cc7/prometheus-operator-admission-webhook/0.log" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 
11:56:54.349124 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-8x2hl_0bd36a22-def8-4ad5-b1b2-ac23ef1ea70b/observability-ui-dashboards/0.log" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 11:56:54.377700 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5wvdz_b75686c5-933f-4f8d-bf87-0229795baf12/operator/0.log" Feb 02 11:56:54 crc kubenswrapper[4845]: I0202 11:56:54.394713 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8hhqb_1b09e2d3-a6a3-49bd-91c0-f75b4957f2ec/perses-operator/0.log" Feb 02 11:57:04 crc kubenswrapper[4845]: I0202 11:57:04.712794 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:57:04 crc kubenswrapper[4845]: E0202 11:57:04.713541 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:57:08 crc kubenswrapper[4845]: I0202 11:57:08.933269 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b659b8cd7-mwl8b_2988a5fa-2703-4a60-bcd6-dc81ceea7e1a/manager/0.log" Feb 02 11:57:09 crc kubenswrapper[4845]: I0202 11:57:09.006202 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b659b8cd7-mwl8b_2988a5fa-2703-4a60-bcd6-dc81ceea7e1a/kube-rbac-proxy/0.log" Feb 02 11:57:18 crc kubenswrapper[4845]: I0202 11:57:18.712699 4845 scope.go:117] "RemoveContainer" 
containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:57:18 crc kubenswrapper[4845]: E0202 11:57:18.713504 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:57:30 crc kubenswrapper[4845]: I0202 11:57:30.713958 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:57:30 crc kubenswrapper[4845]: E0202 11:57:30.714645 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:57:45 crc kubenswrapper[4845]: I0202 11:57:45.714655 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:57:45 crc kubenswrapper[4845]: E0202 11:57:45.715374 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:57:56 crc kubenswrapper[4845]: I0202 11:57:56.713849 4845 scope.go:117] 
"RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:57:56 crc kubenswrapper[4845]: E0202 11:57:56.714677 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:09 crc kubenswrapper[4845]: I0202 11:58:09.720867 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:58:09 crc kubenswrapper[4845]: E0202 11:58:09.722192 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:21 crc kubenswrapper[4845]: I0202 11:58:21.716528 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:58:21 crc kubenswrapper[4845]: E0202 11:58:21.718416 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:33 crc kubenswrapper[4845]: I0202 11:58:33.712912 
4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:58:33 crc kubenswrapper[4845]: E0202 11:58:33.713750 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:45 crc kubenswrapper[4845]: I0202 11:58:45.712806 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:58:45 crc kubenswrapper[4845]: E0202 11:58:45.713735 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:56 crc kubenswrapper[4845]: I0202 11:58:56.713025 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:58:56 crc kubenswrapper[4845]: E0202 11:58:56.713766 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:58:59 crc kubenswrapper[4845]: I0202 
11:58:59.208282 4845 generic.go:334] "Generic (PLEG): container finished" podID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerID="70dfdbb3165ccda62cfeb30152b80bca1a6f55a7313abd9ca43a514f2c7bab8e" exitCode=0 Feb 02 11:58:59 crc kubenswrapper[4845]: I0202 11:58:59.208415 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" event={"ID":"a0d660a9-36eb-4eee-a756-27847f623aa6","Type":"ContainerDied","Data":"70dfdbb3165ccda62cfeb30152b80bca1a6f55a7313abd9ca43a514f2c7bab8e"} Feb 02 11:58:59 crc kubenswrapper[4845]: I0202 11:58:59.209364 4845 scope.go:117] "RemoveContainer" containerID="70dfdbb3165ccda62cfeb30152b80bca1a6f55a7313abd9ca43a514f2c7bab8e" Feb 02 11:59:00 crc kubenswrapper[4845]: I0202 11:59:00.011033 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vx6jc_must-gather-x6jwm_a0d660a9-36eb-4eee-a756-27847f623aa6/gather/0.log" Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.120826 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vx6jc/must-gather-x6jwm"] Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.121800 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="copy" containerID="cri-o://7aac41ff8df6e02efefc2ff06fc1a8b635ec6f7893e527494a9992604219ab7f" gracePeriod=2 Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.132070 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vx6jc/must-gather-x6jwm"] Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.310309 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vx6jc_must-gather-x6jwm_a0d660a9-36eb-4eee-a756-27847f623aa6/copy/0.log" Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.310981 4845 generic.go:334] "Generic (PLEG): container finished" 
podID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerID="7aac41ff8df6e02efefc2ff06fc1a8b635ec6f7893e527494a9992604219ab7f" exitCode=143 Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.720111 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vx6jc_must-gather-x6jwm_a0d660a9-36eb-4eee-a756-27847f623aa6/copy/0.log" Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.724797 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.800711 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqd9j\" (UniqueName: \"kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j\") pod \"a0d660a9-36eb-4eee-a756-27847f623aa6\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.800859 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output\") pod \"a0d660a9-36eb-4eee-a756-27847f623aa6\" (UID: \"a0d660a9-36eb-4eee-a756-27847f623aa6\") " Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.817486 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j" (OuterVolumeSpecName: "kube-api-access-rqd9j") pod "a0d660a9-36eb-4eee-a756-27847f623aa6" (UID: "a0d660a9-36eb-4eee-a756-27847f623aa6"). InnerVolumeSpecName "kube-api-access-rqd9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:59:08 crc kubenswrapper[4845]: I0202 11:59:08.905750 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqd9j\" (UniqueName: \"kubernetes.io/projected/a0d660a9-36eb-4eee-a756-27847f623aa6-kube-api-access-rqd9j\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.011620 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a0d660a9-36eb-4eee-a756-27847f623aa6" (UID: "a0d660a9-36eb-4eee-a756-27847f623aa6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.111400 4845 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a0d660a9-36eb-4eee-a756-27847f623aa6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.335008 4845 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vx6jc_must-gather-x6jwm_a0d660a9-36eb-4eee-a756-27847f623aa6/copy/0.log" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.338968 4845 scope.go:117] "RemoveContainer" containerID="7aac41ff8df6e02efefc2ff06fc1a8b635ec6f7893e527494a9992604219ab7f" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.339190 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vx6jc/must-gather-x6jwm" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.365346 4845 scope.go:117] "RemoveContainer" containerID="70dfdbb3165ccda62cfeb30152b80bca1a6f55a7313abd9ca43a514f2c7bab8e" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.731491 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" path="/var/lib/kubelet/pods/a0d660a9-36eb-4eee-a756-27847f623aa6/volumes" Feb 02 11:59:09 crc kubenswrapper[4845]: I0202 11:59:09.742673 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:59:09 crc kubenswrapper[4845]: E0202 11:59:09.744239 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:59:21 crc kubenswrapper[4845]: I0202 11:59:21.735676 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:59:21 crc kubenswrapper[4845]: E0202 11:59:21.737409 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:59:32 crc kubenswrapper[4845]: I0202 11:59:32.713586 4845 scope.go:117] "RemoveContainer" 
containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:59:32 crc kubenswrapper[4845]: E0202 11:59:32.714645 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.655031 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656144 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="extract-content" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656165 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="extract-content" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656175 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="extract-utilities" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656183 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="extract-utilities" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656196 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="extract-utilities" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656203 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="extract-utilities" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 
11:59:36.656230 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="copy" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656237 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="copy" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656257 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="extract-content" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656264 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="extract-content" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656275 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="gather" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656282 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="gather" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656294 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656301 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: E0202 11:59:36.656337 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656345 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656630 4845 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7d906e33-6090-4cdd-ab5a-749672c65f48" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656648 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="gather" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656666 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d660a9-36eb-4eee-a756-27847f623aa6" containerName="copy" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.656681 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="359cd7da-7e78-430b-90df-0924f27608c2" containerName="registry-server" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.659170 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.683784 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.700191 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr52z\" (UniqueName: \"kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.700477 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.700617 4845 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.802841 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr52z\" (UniqueName: \"kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.802967 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.803013 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.803647 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.803931 4845 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.832077 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr52z\" (UniqueName: \"kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z\") pod \"redhat-marketplace-7sp74\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:36 crc kubenswrapper[4845]: I0202 11:59:36.988155 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:38 crc kubenswrapper[4845]: I0202 11:59:38.104788 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:38 crc kubenswrapper[4845]: I0202 11:59:38.657304 4845 generic.go:334] "Generic (PLEG): container finished" podID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerID="0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2" exitCode=0 Feb 02 11:59:38 crc kubenswrapper[4845]: I0202 11:59:38.657348 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerDied","Data":"0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2"} Feb 02 11:59:38 crc kubenswrapper[4845]: I0202 11:59:38.657639 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerStarted","Data":"ae0be742560de8e985f7d9c6a63dbcc6970fbeb39be0d5d9d0f0e94563fc6946"} Feb 02 11:59:38 crc kubenswrapper[4845]: I0202 
11:59:38.659266 4845 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:59:40 crc kubenswrapper[4845]: I0202 11:59:40.685441 4845 generic.go:334] "Generic (PLEG): container finished" podID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerID="3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581" exitCode=0 Feb 02 11:59:40 crc kubenswrapper[4845]: I0202 11:59:40.685579 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerDied","Data":"3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581"} Feb 02 11:59:41 crc kubenswrapper[4845]: I0202 11:59:41.707417 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerStarted","Data":"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b"} Feb 02 11:59:41 crc kubenswrapper[4845]: I0202 11:59:41.738988 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7sp74" podStartSLOduration=3.305581476 podStartE2EDuration="5.738967055s" podCreationTimestamp="2026-02-02 11:59:36 +0000 UTC" firstStartedPulling="2026-02-02 11:59:38.659006617 +0000 UTC m=+5259.750408087" lastFinishedPulling="2026-02-02 11:59:41.092392206 +0000 UTC m=+5262.183793666" observedRunningTime="2026-02-02 11:59:41.728201167 +0000 UTC m=+5262.819602617" watchObservedRunningTime="2026-02-02 11:59:41.738967055 +0000 UTC m=+5262.830368505" Feb 02 11:59:46 crc kubenswrapper[4845]: I0202 11:59:46.988612 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:46 crc kubenswrapper[4845]: I0202 11:59:46.996764 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:47 crc kubenswrapper[4845]: I0202 11:59:47.056455 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:47 crc kubenswrapper[4845]: I0202 11:59:47.713153 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:59:47 crc kubenswrapper[4845]: E0202 11:59:47.713692 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 11:59:47 crc kubenswrapper[4845]: I0202 11:59:47.857835 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:47 crc kubenswrapper[4845]: I0202 11:59:47.911785 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:49 crc kubenswrapper[4845]: I0202 11:59:49.814214 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7sp74" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="registry-server" containerID="cri-o://aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b" gracePeriod=2 Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.272749 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.388410 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content\") pod \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.388930 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities\") pod \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.388998 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr52z\" (UniqueName: \"kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z\") pod \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\" (UID: \"c2a84c41-3b2c-4c19-835c-e4c499d17fd6\") " Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.390313 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities" (OuterVolumeSpecName: "utilities") pod "c2a84c41-3b2c-4c19-835c-e4c499d17fd6" (UID: "c2a84c41-3b2c-4c19-835c-e4c499d17fd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.398782 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z" (OuterVolumeSpecName: "kube-api-access-fr52z") pod "c2a84c41-3b2c-4c19-835c-e4c499d17fd6" (UID: "c2a84c41-3b2c-4c19-835c-e4c499d17fd6"). InnerVolumeSpecName "kube-api-access-fr52z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.411253 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2a84c41-3b2c-4c19-835c-e4c499d17fd6" (UID: "c2a84c41-3b2c-4c19-835c-e4c499d17fd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.492300 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.492340 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.492358 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr52z\" (UniqueName: \"kubernetes.io/projected/c2a84c41-3b2c-4c19-835c-e4c499d17fd6-kube-api-access-fr52z\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.828217 4845 generic.go:334] "Generic (PLEG): container finished" podID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerID="aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b" exitCode=0 Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.828304 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerDied","Data":"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b"} Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.828364 4845 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7sp74" event={"ID":"c2a84c41-3b2c-4c19-835c-e4c499d17fd6","Type":"ContainerDied","Data":"ae0be742560de8e985f7d9c6a63dbcc6970fbeb39be0d5d9d0f0e94563fc6946"} Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.828394 4845 scope.go:117] "RemoveContainer" containerID="aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.828343 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sp74" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.850415 4845 scope.go:117] "RemoveContainer" containerID="3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581" Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.883467 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:50 crc kubenswrapper[4845]: I0202 11:59:50.896713 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sp74"] Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.057055 4845 scope.go:117] "RemoveContainer" containerID="0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.212699 4845 scope.go:117] "RemoveContainer" containerID="aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b" Feb 02 11:59:51 crc kubenswrapper[4845]: E0202 11:59:51.213286 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b\": container with ID starting with aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b not found: ID does not exist" containerID="aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.213327 4845 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b"} err="failed to get container status \"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b\": rpc error: code = NotFound desc = could not find container \"aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b\": container with ID starting with aca0b5c0675ee6dc221c9a457393cdcd574af5330434a45f36dcedf8c37df25b not found: ID does not exist" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.213358 4845 scope.go:117] "RemoveContainer" containerID="3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581" Feb 02 11:59:51 crc kubenswrapper[4845]: E0202 11:59:51.213649 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581\": container with ID starting with 3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581 not found: ID does not exist" containerID="3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.213694 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581"} err="failed to get container status \"3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581\": rpc error: code = NotFound desc = could not find container \"3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581\": container with ID starting with 3a8680cd74b69b425a58f79f2b7b7058593435c91a1fa6335d1f8e160ed99581 not found: ID does not exist" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.213719 4845 scope.go:117] "RemoveContainer" containerID="0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2" Feb 02 11:59:51 crc kubenswrapper[4845]: E0202 
11:59:51.214036 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2\": container with ID starting with 0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2 not found: ID does not exist" containerID="0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.214061 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2"} err="failed to get container status \"0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2\": rpc error: code = NotFound desc = could not find container \"0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2\": container with ID starting with 0b279424a3b9c4c386a448412312fe3bf0c33b7a092a4d78897a79bfeb62b2c2 not found: ID does not exist" Feb 02 11:59:51 crc kubenswrapper[4845]: I0202 11:59:51.731353 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" path="/var/lib/kubelet/pods/c2a84c41-3b2c-4c19-835c-e4c499d17fd6/volumes" Feb 02 11:59:59 crc kubenswrapper[4845]: I0202 11:59:59.723212 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 11:59:59 crc kubenswrapper[4845]: E0202 11:59:59.724586 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.163185 
4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr"] Feb 02 12:00:00 crc kubenswrapper[4845]: E0202 12:00:00.163978 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.164008 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4845]: E0202 12:00:00.164038 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.164049 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4845]: E0202 12:00:00.164094 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.164107 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.164428 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a84c41-3b2c-4c19-835c-e4c499d17fd6" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.165634 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.169954 4845 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.170255 4845 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.179173 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr"] Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.244337 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.244727 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wktkr\" (UniqueName: \"kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.245106 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.346965 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.347091 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.347115 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wktkr\" (UniqueName: \"kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.349271 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.353732 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.365520 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wktkr\" (UniqueName: \"kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr\") pod \"collect-profiles-29500560-fhdkr\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.504629 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:00 crc kubenswrapper[4845]: I0202 12:00:00.996578 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr"] Feb 02 12:00:01 crc kubenswrapper[4845]: I0202 12:00:01.950604 4845 generic.go:334] "Generic (PLEG): container finished" podID="6c522af1-b700-4069-82b9-0c84cb693b9e" containerID="a9a1bdd86fe1a1a40b5e9dda368e1bfc9c7b0c92c27d045a527e9d73e2c3b7f9" exitCode=0 Feb 02 12:00:01 crc kubenswrapper[4845]: I0202 12:00:01.950679 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" event={"ID":"6c522af1-b700-4069-82b9-0c84cb693b9e","Type":"ContainerDied","Data":"a9a1bdd86fe1a1a40b5e9dda368e1bfc9c7b0c92c27d045a527e9d73e2c3b7f9"} Feb 02 12:00:01 crc kubenswrapper[4845]: I0202 12:00:01.950944 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" 
event={"ID":"6c522af1-b700-4069-82b9-0c84cb693b9e","Type":"ContainerStarted","Data":"86c6f078dded756def1df3f35c0baebc412526d92b3ff644a7b9e797bd8e4719"} Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.359183 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.439772 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume\") pod \"6c522af1-b700-4069-82b9-0c84cb693b9e\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.439999 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wktkr\" (UniqueName: \"kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr\") pod \"6c522af1-b700-4069-82b9-0c84cb693b9e\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.440126 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume\") pod \"6c522af1-b700-4069-82b9-0c84cb693b9e\" (UID: \"6c522af1-b700-4069-82b9-0c84cb693b9e\") " Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.440805 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume" (OuterVolumeSpecName: "config-volume") pod "6c522af1-b700-4069-82b9-0c84cb693b9e" (UID: "6c522af1-b700-4069-82b9-0c84cb693b9e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.441856 4845 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c522af1-b700-4069-82b9-0c84cb693b9e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.448182 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6c522af1-b700-4069-82b9-0c84cb693b9e" (UID: "6c522af1-b700-4069-82b9-0c84cb693b9e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.451463 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr" (OuterVolumeSpecName: "kube-api-access-wktkr") pod "6c522af1-b700-4069-82b9-0c84cb693b9e" (UID: "6c522af1-b700-4069-82b9-0c84cb693b9e"). InnerVolumeSpecName "kube-api-access-wktkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.544253 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wktkr\" (UniqueName: \"kubernetes.io/projected/6c522af1-b700-4069-82b9-0c84cb693b9e-kube-api-access-wktkr\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.544434 4845 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c522af1-b700-4069-82b9-0c84cb693b9e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.973355 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" event={"ID":"6c522af1-b700-4069-82b9-0c84cb693b9e","Type":"ContainerDied","Data":"86c6f078dded756def1df3f35c0baebc412526d92b3ff644a7b9e797bd8e4719"} Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.973397 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c6f078dded756def1df3f35c0baebc412526d92b3ff644a7b9e797bd8e4719" Feb 02 12:00:03 crc kubenswrapper[4845]: I0202 12:00:03.973679 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-fhdkr" Feb 02 12:00:04 crc kubenswrapper[4845]: I0202 12:00:04.436339 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn"] Feb 02 12:00:04 crc kubenswrapper[4845]: I0202 12:00:04.445641 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-lbbxn"] Feb 02 12:00:05 crc kubenswrapper[4845]: I0202 12:00:05.727903 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe7b56f-4954-457d-8bb8-a0a50096cfb9" path="/var/lib/kubelet/pods/dfe7b56f-4954-457d-8bb8-a0a50096cfb9/volumes" Feb 02 12:00:12 crc kubenswrapper[4845]: I0202 12:00:12.445605 4845 scope.go:117] "RemoveContainer" containerID="7f56eeae3b5e853cc5e5d1ab4c3fe6f56e0c913955d2cd95163f2033cd7e1417" Feb 02 12:00:14 crc kubenswrapper[4845]: I0202 12:00:14.713471 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 12:00:14 crc kubenswrapper[4845]: E0202 12:00:14.714162 4845 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wnn9_openshift-machine-config-operator(ebf2f253-531f-4835-84c1-928680352f7f)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" Feb 02 12:00:25 crc kubenswrapper[4845]: I0202 12:00:25.713642 4845 scope.go:117] "RemoveContainer" containerID="3a4a1d82ec8333e6a28c6362b40eb2696c27da7912949e706323c4e360c22f75" Feb 02 12:00:27 crc kubenswrapper[4845]: I0202 12:00:27.209319 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" 
event={"ID":"ebf2f253-531f-4835-84c1-928680352f7f","Type":"ContainerStarted","Data":"4b9bcae9c88976b116a4c6a1c0683c93187733cbf529aca20585d37c96e7fd95"} Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.164819 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500561-gzmjj"] Feb 02 12:01:00 crc kubenswrapper[4845]: E0202 12:01:00.166677 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c522af1-b700-4069-82b9-0c84cb693b9e" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.166712 4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c522af1-b700-4069-82b9-0c84cb693b9e" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.167334 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c522af1-b700-4069-82b9-0c84cb693b9e" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.169341 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.177967 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-gzmjj"] Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.302741 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.303208 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.303536 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.303675 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8dxk\" (UniqueName: \"kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.406086 4845 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.406247 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.406359 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.406418 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8dxk\" (UniqueName: \"kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.413051 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.413517 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.419080 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.427223 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8dxk\" (UniqueName: \"kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk\") pod \"keystone-cron-29500561-gzmjj\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.496148 4845 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:00 crc kubenswrapper[4845]: I0202 12:01:00.993447 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-gzmjj"] Feb 02 12:01:01 crc kubenswrapper[4845]: W0202 12:01:01.000594 4845 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabddecef_869d_410a_9658_52b8eb816fd7.slice/crio-41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865 WatchSource:0}: Error finding container 41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865: Status 404 returned error can't find the container with id 41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865 Feb 02 12:01:01 crc kubenswrapper[4845]: I0202 12:01:01.539966 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-gzmjj" event={"ID":"abddecef-869d-410a-9658-52b8eb816fd7","Type":"ContainerStarted","Data":"446b47f81c8aef222c759e53b536fd0c7ed3763c3fb014aa5772153f55483f07"} Feb 02 12:01:01 crc kubenswrapper[4845]: I0202 12:01:01.540274 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-gzmjj" event={"ID":"abddecef-869d-410a-9658-52b8eb816fd7","Type":"ContainerStarted","Data":"41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865"} Feb 02 12:01:01 crc kubenswrapper[4845]: I0202 12:01:01.562711 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500561-gzmjj" podStartSLOduration=1.562689113 podStartE2EDuration="1.562689113s" podCreationTimestamp="2026-02-02 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:01:01.556050163 +0000 UTC m=+5342.647451613" watchObservedRunningTime="2026-02-02 12:01:01.562689113 +0000 UTC m=+5342.654090563" Feb 02 12:01:05 crc 
kubenswrapper[4845]: I0202 12:01:05.581587 4845 generic.go:334] "Generic (PLEG): container finished" podID="abddecef-869d-410a-9658-52b8eb816fd7" containerID="446b47f81c8aef222c759e53b536fd0c7ed3763c3fb014aa5772153f55483f07" exitCode=0 Feb 02 12:01:05 crc kubenswrapper[4845]: I0202 12:01:05.581677 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-gzmjj" event={"ID":"abddecef-869d-410a-9658-52b8eb816fd7","Type":"ContainerDied","Data":"446b47f81c8aef222c759e53b536fd0c7ed3763c3fb014aa5772153f55483f07"} Feb 02 12:01:06 crc kubenswrapper[4845]: I0202 12:01:06.979713 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.076520 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8dxk\" (UniqueName: \"kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk\") pod \"abddecef-869d-410a-9658-52b8eb816fd7\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.076923 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys\") pod \"abddecef-869d-410a-9658-52b8eb816fd7\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.077072 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data\") pod \"abddecef-869d-410a-9658-52b8eb816fd7\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.077420 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle\") pod \"abddecef-869d-410a-9658-52b8eb816fd7\" (UID: \"abddecef-869d-410a-9658-52b8eb816fd7\") " Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.082675 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk" (OuterVolumeSpecName: "kube-api-access-h8dxk") pod "abddecef-869d-410a-9658-52b8eb816fd7" (UID: "abddecef-869d-410a-9658-52b8eb816fd7"). InnerVolumeSpecName "kube-api-access-h8dxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.085722 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "abddecef-869d-410a-9658-52b8eb816fd7" (UID: "abddecef-869d-410a-9658-52b8eb816fd7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.109063 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abddecef-869d-410a-9658-52b8eb816fd7" (UID: "abddecef-869d-410a-9658-52b8eb816fd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.134452 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data" (OuterVolumeSpecName: "config-data") pod "abddecef-869d-410a-9658-52b8eb816fd7" (UID: "abddecef-869d-410a-9658-52b8eb816fd7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.181279 4845 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.181507 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8dxk\" (UniqueName: \"kubernetes.io/projected/abddecef-869d-410a-9658-52b8eb816fd7-kube-api-access-h8dxk\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.181595 4845 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.181647 4845 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abddecef-869d-410a-9658-52b8eb816fd7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.604051 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-gzmjj" event={"ID":"abddecef-869d-410a-9658-52b8eb816fd7","Type":"ContainerDied","Data":"41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865"} Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.604096 4845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f7fb5ab2f958c2d29c01a42788d8c823094bd7f1c2de8ec61b848eb9995865" Feb 02 12:01:07 crc kubenswrapper[4845]: I0202 12:01:07.604418 4845 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-gzmjj" Feb 02 12:02:46 crc kubenswrapper[4845]: I0202 12:02:46.237285 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:02:46 crc kubenswrapper[4845]: I0202 12:02:46.237880 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:16 crc kubenswrapper[4845]: I0202 12:03:16.237408 4845 patch_prober.go:28] interesting pod/machine-config-daemon-2wnn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:03:16 crc kubenswrapper[4845]: I0202 12:03:16.237931 4845 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wnn9" podUID="ebf2f253-531f-4835-84c1-928680352f7f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:24 crc kubenswrapper[4845]: I0202 12:03:24.952406 4845 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"] Feb 02 12:03:24 crc kubenswrapper[4845]: E0202 12:03:24.956381 4845 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abddecef-869d-410a-9658-52b8eb816fd7" containerName="keystone-cron" Feb 02 12:03:24 crc kubenswrapper[4845]: I0202 12:03:24.956424 
4845 state_mem.go:107] "Deleted CPUSet assignment" podUID="abddecef-869d-410a-9658-52b8eb816fd7" containerName="keystone-cron" Feb 02 12:03:24 crc kubenswrapper[4845]: I0202 12:03:24.956959 4845 memory_manager.go:354] "RemoveStaleState removing state" podUID="abddecef-869d-410a-9658-52b8eb816fd7" containerName="keystone-cron" Feb 02 12:03:24 crc kubenswrapper[4845]: I0202 12:03:24.975153 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:24 crc kubenswrapper[4845]: I0202 12:03:24.984565 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"] Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.037227 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qf8\" (UniqueName: \"kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.037433 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.037518 4845 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 
12:03:25.140762 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.141256 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.141543 4845 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6qf8\" (UniqueName: \"kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.141884 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.144147 4845 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh" Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.166382 4845 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6qf8\" (UniqueName: \"kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8\") pod \"certified-operators-wxhjh\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") " pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:25 crc kubenswrapper[4845]: I0202 12:03:25.308622 4845 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:26 crc kubenswrapper[4845]: I0202 12:03:26.048311 4845 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"]
Feb 02 12:03:26 crc kubenswrapper[4845]: I0202 12:03:26.097742 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerStarted","Data":"58b4b61f11c41e7a9802231b94ce0f3773ae77afd46dde6a7945c82fbc472073"}
Feb 02 12:03:27 crc kubenswrapper[4845]: I0202 12:03:27.112720 4845 generic.go:334] "Generic (PLEG): container finished" podID="0cce6e08-6894-471c-8db2-1846df2e73bb" containerID="9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c" exitCode=0
Feb 02 12:03:27 crc kubenswrapper[4845]: I0202 12:03:27.113016 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerDied","Data":"9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c"}
Feb 02 12:03:28 crc kubenswrapper[4845]: I0202 12:03:28.128280 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerStarted","Data":"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"}
Feb 02 12:03:30 crc kubenswrapper[4845]: I0202 12:03:30.150632 4845 generic.go:334] "Generic (PLEG): container finished" podID="0cce6e08-6894-471c-8db2-1846df2e73bb" containerID="84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89" exitCode=0
Feb 02 12:03:30 crc kubenswrapper[4845]: I0202 12:03:30.150684 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerDied","Data":"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"}
Feb 02 12:03:32 crc kubenswrapper[4845]: I0202 12:03:32.175630 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerStarted","Data":"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"}
Feb 02 12:03:32 crc kubenswrapper[4845]: I0202 12:03:32.209570 4845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxhjh" podStartSLOduration=4.616602257 podStartE2EDuration="8.209545021s" podCreationTimestamp="2026-02-02 12:03:24 +0000 UTC" firstStartedPulling="2026-02-02 12:03:27.117503277 +0000 UTC m=+5488.208904727" lastFinishedPulling="2026-02-02 12:03:30.710446041 +0000 UTC m=+5491.801847491" observedRunningTime="2026-02-02 12:03:32.198955918 +0000 UTC m=+5493.290357378" watchObservedRunningTime="2026-02-02 12:03:32.209545021 +0000 UTC m=+5493.300946481"
Feb 02 12:03:35 crc kubenswrapper[4845]: I0202 12:03:35.309243 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:35 crc kubenswrapper[4845]: I0202 12:03:35.309949 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:35 crc kubenswrapper[4845]: I0202 12:03:35.366209 4845 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:36 crc kubenswrapper[4845]: I0202 12:03:36.286373 4845 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:36 crc kubenswrapper[4845]: I0202 12:03:36.349875 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"]
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.250421 4845 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxhjh" podUID="0cce6e08-6894-471c-8db2-1846df2e73bb" containerName="registry-server" containerID="cri-o://d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242" gracePeriod=2
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.800941 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.818261 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content\") pod \"0cce6e08-6894-471c-8db2-1846df2e73bb\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") "
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.818339 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities\") pod \"0cce6e08-6894-471c-8db2-1846df2e73bb\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") "
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.818407 4845 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6qf8\" (UniqueName: \"kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8\") pod \"0cce6e08-6894-471c-8db2-1846df2e73bb\" (UID: \"0cce6e08-6894-471c-8db2-1846df2e73bb\") "
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.819910 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities" (OuterVolumeSpecName: "utilities") pod "0cce6e08-6894-471c-8db2-1846df2e73bb" (UID: "0cce6e08-6894-471c-8db2-1846df2e73bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.841355 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8" (OuterVolumeSpecName: "kube-api-access-q6qf8") pod "0cce6e08-6894-471c-8db2-1846df2e73bb" (UID: "0cce6e08-6894-471c-8db2-1846df2e73bb"). InnerVolumeSpecName "kube-api-access-q6qf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.909863 4845 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cce6e08-6894-471c-8db2-1846df2e73bb" (UID: "0cce6e08-6894-471c-8db2-1846df2e73bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.921418 4845 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.921471 4845 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6qf8\" (UniqueName: \"kubernetes.io/projected/0cce6e08-6894-471c-8db2-1846df2e73bb-kube-api-access-q6qf8\") on node \"crc\" DevicePath \"\""
Feb 02 12:03:38 crc kubenswrapper[4845]: I0202 12:03:38.921483 4845 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cce6e08-6894-471c-8db2-1846df2e73bb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.262696 4845 generic.go:334] "Generic (PLEG): container finished" podID="0cce6e08-6894-471c-8db2-1846df2e73bb" containerID="d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242" exitCode=0
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.262762 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerDied","Data":"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"}
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.262796 4845 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjh" event={"ID":"0cce6e08-6894-471c-8db2-1846df2e73bb","Type":"ContainerDied","Data":"58b4b61f11c41e7a9802231b94ce0f3773ae77afd46dde6a7945c82fbc472073"}
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.262817 4845 scope.go:117] "RemoveContainer" containerID="d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.263109 4845 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjh"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.286148 4845 scope.go:117] "RemoveContainer" containerID="84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.307568 4845 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"]
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.324750 4845 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxhjh"]
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.324845 4845 scope.go:117] "RemoveContainer" containerID="9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.364134 4845 scope.go:117] "RemoveContainer" containerID="d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"
Feb 02 12:03:39 crc kubenswrapper[4845]: E0202 12:03:39.364592 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242\": container with ID starting with d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242 not found: ID does not exist" containerID="d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.364625 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242"} err="failed to get container status \"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242\": rpc error: code = NotFound desc = could not find container \"d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242\": container with ID starting with d3ec9a9ced8824a1e5528138ddad98c41c5eb624391eeb46586edcaaca306242 not found: ID does not exist"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.364648 4845 scope.go:117] "RemoveContainer" containerID="84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"
Feb 02 12:03:39 crc kubenswrapper[4845]: E0202 12:03:39.365105 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89\": container with ID starting with 84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89 not found: ID does not exist" containerID="84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.365152 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89"} err="failed to get container status \"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89\": rpc error: code = NotFound desc = could not find container \"84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89\": container with ID starting with 84fae2069a8b6dc0ad3bdf85e6f70858e60543e9ac9625a268afdf26aebdad89 not found: ID does not exist"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.365178 4845 scope.go:117] "RemoveContainer" containerID="9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c"
Feb 02 12:03:39 crc kubenswrapper[4845]: E0202 12:03:39.365585 4845 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c\": container with ID starting with 9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c not found: ID does not exist" containerID="9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.365613 4845 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c"} err="failed to get container status \"9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c\": rpc error: code = NotFound desc = could not find container \"9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c\": container with ID starting with 9dc80a0800420b098a6aaded96b161dd9ad1323a5c391d8608dfbbc546392a2c not found: ID does not exist"
Feb 02 12:03:39 crc kubenswrapper[4845]: I0202 12:03:39.726264 4845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cce6e08-6894-471c-8db2-1846df2e73bb" path="/var/lib/kubelet/pods/0cce6e08-6894-471c-8db2-1846df2e73bb/volumes"